07-08-2013, 10:57 PM
(This post was last modified: 07-08-2013, 11:12 PM by Xinef.)
I think you can look at Scala in two ways:
- As an Object Oriented alternative to Java
- As a Functional Programming language (I'd call it idiomatic Scala)
Object Oriented Scala is just what Java would be if it could evolve quickly and weren't held back by backwards compatibility and the other constraints that make it a good choice for business, but a bad one for new, innovative projects. Java is an old language and it shows. Scala is more concise, has many features found in modern languages and evolves quickly. I love the fact that it is statically typed, but with type inference you rarely have to write the types down. Operator overloading is really neat for graphics, physics and similar stuff. Traits are a great thing - they're like interfaces in Java, except they come with an implementation, so just adding a trait to a class is enough to add the implementation as well. Lambda expressions are another thing Java was missing, and the workarounds were needlessly bloated.

Some sources claim that Java projects rewritten in Scala require three times less code. I guess it largely depends on the project - I don't see that much of a difference in my own projects, but that may be because I still have many habits from Java. Still, I'd consider Scala code cleaner and easier to understand, assuming the reader knows both Java and Scala equally well.
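To give a quick taste of those features (a made-up toy example, not from any real project): a trait carries its implementation, type inference keeps annotations rare, and lambdas avoid the anonymous-class boilerplate Java needed at the time.

```scala
// A trait can carry an implementation, unlike a classic Java interface:
// mixing it in is enough to get the method for free.
trait Greeter {
  def name: String
  def greet: String = s"Hello, $name!"  // concrete method lives in the trait
}

case class Pony(name: String) extends Greeter  // gets greet for free

val ponies = List(Pony("Fluttershy"), Pony("Rainbow Dash"))  // types inferred
val greetings = ponies.map(p => p.greet)  // lambda instead of an anonymous class
```

In Java of that era the `map` call alone would have required an interface plus an anonymous class implementing it.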
And there is idiomatic Scala. Theoretically it is possible to combine Object Oriented programming with Functional Programming, since they are not exactly contradictory - you just need immutable objects. In any case, Scala supports both paradigms, so you can have some parts of your project written in an imperative manner and some written functionally. In fact, when dealing with most Java libraries you'll need imperative code, since the libraries themselves assume you write it. Still, there are benefits (and disadvantages) to writing everything else in a functional manner. Most books on Scala recommend that you try learning Functional Programming, and the people who created Scala are mostly strong supporters of that paradigm.
So... what do you gain by restricting yourself to immutable objects and pure, referentially transparent functions (and that's the core of Functional Programming)? The code is much easier to parallelize: there's no risk of concurrent modification, since after an object is created, only read operations are performed on it. In fact, the compiler itself could theoretically turn your code into a multithreaded one, though this is still mostly a field of research. Other than that, debugging is easier, since to check whether the state of the program is correct you only have to monitor the values returned by functions. Also, some claim that functional code is easier to read and more intuitive, since it resembles the way you'd express the algorithm in mathematical terms.
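The parallelization argument can be sketched in a few lines (a toy example of mine, using plain standard-library Futures): because the function is pure, computing the elements on separate threads cannot change the result, and no locks are needed.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// A pure function: the result depends only on the argument, no side effects.
def square(x: Int): Int = x * x

val xs = (1 to 8).toList
val sequential = xs.map(square)

// Each element can safely be computed on another thread, because square
// cannot observe or modify any shared state. Future.traverse keeps order.
val parallel = Await.result(Future.traverse(xs)(x => Future(square(x))), 10.seconds)

assert(sequential == parallel)  // always holds for a pure function
```

With a side-effecting function the parallel version could race; with a pure one the two runs are interchangeable by construction.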
I personally am still learning how to use idiomatic Scala properly, and my habits from Java and C++ are still very obvious. I often find that my code could be written much more concisely and clearly if only I were stricter about following the paradigms. It's still quite a challenge to take some of my imperative code and rewrite it in a more functional manner. And I'm still weighing whether the benefits are worth it. The main disadvantage is that, mostly due to immutable state, the memory requirements are often higher, and so are the constants in the computational complexity of some algorithms - for example, if you add a value to a mutable tree, only one node is modified, while adding a node to an immutable tree requires duplicating every node on the path to it. For a balanced tree the operation is still O(log(n)), but the constants are higher. So... I just plan to write some relatively complex benchmarks in both styles and compare them for myself - that's one of the things I plan to do in my master's thesis.
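The tree example can be made concrete with a minimal persistent BST (a toy, unbalanced one of my own, just to show the mechanism): insert copies only the nodes along the search path and shares everything else, which is exactly where both the O(log n) bound and the higher constants come from.

```scala
// A minimal persistent (immutable) binary search tree.
sealed trait Tree
case object Leaf extends Tree
case class Node(left: Tree, value: Int, right: Tree) extends Tree

// Inserting copies only the nodes on the search path; all other
// subtrees are shared between the old and the new version.
def insert(t: Tree, x: Int): Tree = t match {
  case Leaf => Node(Leaf, x, Leaf)
  case n @ Node(l, v, r) =>
    if (x < v)      Node(insert(l, x), v, r)  // new node, right subtree shared
    else if (x > v) Node(l, v, insert(r, x))  // new node, left subtree shared
    else            n                         // already present: share everything
}
```

After `val t2 = insert(t1, 1)` both versions coexist - the old tree is untouched - at the cost of duplicating one node per level instead of mutating a single node.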
Anyway, I'm glad you asked
If there's anything else you'd like to know, fire at will!
Heh, your impressions sound almost exactly like mine, including concerns about the overhead of immutable structures and weighing that against the potential benefits of improved parallelism, as well as difficulties in adapting to idiomatic functional style. I've tried many times over the years but it just never quite sticks for me and I find myself back in more traditional procedural/OO languages. Maybe someday...
But yeah, even non-idiomatic Scala looked way better than plain Java. Java's stagnancy was a major contributing factor in my choosing C# instead. C# started as a Java knockoff, but in that time has continued to develop and is now superior in every way.
Cool that you're doing your master's on this. I'd be interested in hearing the results. I was seriously considering doing a master's myself, but had a job prospect waiting so opted to go straight to industry instead. Sometimes I wish I'd stuck it out for the extra year or two and done it though. Oh well.
Parallelism is a particular interest of mine and I've done some work trying to add it to things that are difficult to parallelise, like an SNES emulator and a lock-free task scheduler for XNA before it went defunct.
Anyway, I'll stop now before I drag your intro thread too off-topic. I think the Technical forum could benefit from more discussion like this.
I wouldn't go as far as to say that C# is better than Java in every regard, though to be honest I don't know too much about C#, since I'm mostly a Linux user and I try not to rely on anything coming from Microsoft. I'm not prejudiced against Microsoft on principle, but I've had enough negative experiences with their technologies, and seen enough bad design decisions made by them, that it's hard for me to be objective on the matter. I have to say that C# has many strong points and advantages over Java. In fact I can see that Scala was greatly influenced by C# (not to mention that it is possible to run Scala on .NET, though I've never tried that).
Anyway, when I was in high school I already had some C++ experience, and that's roughly when I started learning Java. When I try to think of the reasons why it got my interest, it was probably the fact that it was so independent from the platform/hardware compared to C++. That solved many problems, since most of the problems with C++ were platform related (e.g. "how do I make the Hello World of technology XYZ run?"). With Java, most technologies simply worked out of the box.
So, even if Java itself isn't the best language around, the JVM is quite an awesome achievement. Once again, I'm hardly an expert when it comes to .NET, but from what I've seen the JVM has some advantages over it, and especially over Mono. I guess many benchmarks that compare Java to C# are biased, but the most reliable ones I could find showed that Java is in most cases slightly faster than C# on Linux and Mac, while slightly slower on Windows. This might be because .NET is tightly integrated with Windows, while the JVM is treated by Windows just like any other 'foreign' software. Compared to C++, the overhead of the virtual machines depended greatly on the given problem, but on average I guess it was somewhere around 20% slower... so yeah, C++ is Rainbow Dash.
By the way, a long time ago I had this idea to design a pony for each programming language (I've seen web browser ponies)... I was thinking that languages that run on virtual machines would be depicted as pegasi to show their independence, while declarative languages could be depicted as unicorns because they're kinda magical... you know, black magic happens within...
So, anyway, the main focus of my thesis is real time ray tracing, so high performance is a priority. Ray tracing is an embarrassingly parallel problem by itself (you can trace each ray independently), but some optimizations make things more complex, e.g. you can try to use the information gathered by one ray to speed up the calculations of a nearby ray.
So I was wondering if the ease of parallelization would mean that the functional approach would actually result in a better or at least comparable solution to a more traditional imperative approach. So I plan to write the functional implementation in Scala, while the imperative implementation would be in C++ (since it's the staple choice for high performance imperative computing). It might be possible to implement a ray tracer in C++ while adhering to the principles of functional programming, but since C++ does not actively support such endeavors, I guess it wouldn't be a good decision (it might be faster than the Scala version, but it would be a lot harder to maintain and develop).
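The embarrassingly parallel core can be sketched like this (the `traceRay` body is a hypothetical placeholder of mine, not a real shader): each primary ray depends only on its pixel coordinates, so every pixel becomes an independent task with no coordination between them.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

case class Color(r: Double, g: Double, b: Double)

// Placeholder shading: a real tracer would intersect the scene here.
def traceRay(x: Int, y: Int, w: Int, h: Int): Color =
  Color(x.toDouble / w, y.toDouble / h, 0.5)

// Each primary ray is independent, so the pixel loop needs no locks:
// every pixel is simply a separate task, and order is preserved.
def render(w: Int, h: Int): List[Color] = {
  val pixels = (0 until w * h).toList
  Await.result(
    Future.traverse(pixels)(i => Future(traceRay(i % w, i / w, w, h))),
    30.seconds)
}
```

The optimizations mentioned above (sharing information between nearby rays) are exactly what breaks this clean independence and makes the real problem harder.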
There's also another thing - with performance in mind, one would be foolish to completely ignore the GPU. So I'm definitely going to try using it to speed things up. I'm not sure if I'll do this for my master's thesis, since it's already quite an ambitious one, but I'll definitely go for it at a later time.
I've already experimented a bit with GLSL shaders and I'm looking into CUDA and OpenCL. So that would be the next step, though this is quite an independent problem from the imperative vs. functional and Scala vs. C++ questions, so I guess trying to include it in my thesis would only unnecessarily complicate things.
And there's also NVIDIA's OptiX technology, which obviously might be very interesting to look at, though I haven't delved into it yet.
Quote:Anyway, I'll stop now before I drag your intro thread too off-topic. I think the Technical forum could benefit from more discussion like this.
I guess it could, though I don't mind it happening in my thread, since right now we're switching topics quite rapidly anyway. I was going to start a topic there about how much I love DAGs (Directed Acyclic Graphs), mostly due to how perfectly they model human communication. If forums were designed like DAGs, it would be much easier to keep off-topic in line... err... in DAG. The linearity of threads is the problem.
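What I mean could be sketched like this (a hypothetical toy model, nothing any forum actually implements): if a post may reply to several earlier posts at once, off-topic tangents become separate branches, and a later post can even join branches back together - which a linear thread can't represent.

```scala
// A hypothetical forum where a post can reply to several earlier posts,
// so a thread forms a DAG instead of a linear list.
case class Post(id: Int, text: String, replyTo: Set[Int])

val posts = List(
  Post(1, "Intro",             Set.empty),
  Post(2, "About Scala",       Set(1)),     // one branch
  Post(3, "About ray tracing", Set(1)),     // an off-topic branch
  Post(4, "Scala ray tracer",  Set(2, 3))   // joins both: needs a DAG
)

// Acyclicity holds as long as posts only reference earlier ids.
def isAcyclic(ps: List[Post]): Boolean =
  ps.forall(p => p.replyTo.forall(_ < p.id))
```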
07-09-2013, 04:03 AM
(This post was last modified: 07-09-2013, 04:09 AM by Destineer.)
I completely agree about wanting to avoid Microsoft stuff, but Java is also in the position of being controlled by a mega-corporation, namely Oracle. So I suppose in the end it's just picking your poison. Mono and OpenJDK are both excellent alternatives to the "official" runtimes provided by the respective companies. Our company has used Mono on Linux with success, and Mono for Android / MonoTouch for iOS are quite good as well.
Last I recall, the JVM does have the most sophisticated garbage collector in existence, but Mono's has been improving steadily for a while now, so the difference isn't too severe, especially when you exercise modest restraint in allocating new objects.
I've done a lot of work with .NET, including low level stuff manipulating MSIL bytecode, and it's VERY well designed. That's less to do with Microsoft though, and more to do with Anders Hejlsberg, who really knows his stuff. He was one of the minds behind Turbo Pascal, which had incredibly fast compilation times, even by today's standards. I consider C# and the CLR to be more a product of him and the other engineers, with the unfortunate side effect of them working for Microsoft at the time. Mono allows all the benefit without the Windows aftertaste :)
I hear you on not wanting to maintain a C++ codebase for something like that. It's been my experience that the speed advantage is fickle and not really worth the dramatically increased development time, unless you have hard realtime requirements that simply can't be met any other way, which is relatively rare. Even in the games industry, game logic is often written in Lua or other scripting languages that are significantly slower than Java or .NET, simply because performance isn't the bottleneck there and they can afford the overhead for the dramatic productivity gains.
John Carmack has done a lot of research into realtime raytracing, and his findings are interesting. It actually doesn't parallelise quite as well as one might think at first glance, at least for reflective surfaces. The main problem is cache coherency. When the rays are initially cast, they're very near each other and likely to end up accessing the same data once they hit an object. Once they reflect off of a curved surface though, they diverge rapidly, and each ray will hit wildly different locations, causing the CPU cache to start thrashing and performance drops like a rock. I wish I could remember which talk/paper it's from.
My views on Mono and OpenJDK might be a little outdated then. It is quite likely that now they are much better than they used to be.
I don't know too much about the JVM's inner workings, but I've heard that it has many advanced algorithms, including ones that use runtime statistics to optimize things - like finding out which variables should be stored in registers instead of memory, and maybe ways to improve the cache hit-to-miss ratio.
I know many game developers and even whole communities dedicated to game development claim that C++ is the only reasonable choice in game development. I guess they're just performance freaks, fighting for those FPSes like they're the most important thing.
I personally hope to achieve real time motion blur, so that I can have a constant 30 FPS or maybe even 24 FPS like movies in cinema
I guess it doesn't hurt to optimize a few bottlenecks, and this might be enough to gain most of the possible speedup without maintaining too much low-level code. Many Java libraries include some native code for exactly that purpose. In ray tracing, I guess things like loading models and building the scene graph don't really need to be optimized; only intersection detection and shading are really critical. Here, performance might actually be worth fighting for, since it could move the point at which real time ray tracing becomes a viable alternative to rasterisation forward by a few months or maybe a year.
Though it is indeed a very bad decision to write the whole game in low-level code when you haven't yet prototyped the gameplay to find out if it's any good - or when doing so may sink the whole project because of the amount of time and money the development takes.
I was considering learning Python, mostly because of Blender, but the whole dynamic type system is a bit unfamiliar to me, as a person used to well-defined static types.
By the way, some people think that scripting languages like Python are a good entry point for people who want to learn programming, because they let you concentrate on algorithms instead of hardware-related issues. Other people claim that this leaves learners with a very poor understanding of the computer's inner workings and therefore causes many errors, so they recommend languages like C++. I guess C++ is suffering from a bit too much featuritis (OK, if you do succeed in understanding C++, you're well on your way to understanding any other language, but I guess learning C++ takes more time than learning any other language).
But I've known a person who... believe it or not... was amazed at how much learning assembler helped him with learning programming in general. He started with C++ I think, though he had problems understanding it. Then he tried assembler and finally understood how it all works. So... I guess it completely depends on the person, and there is no single language that is good for everyone.
I have a number of resources for both real time and offline ray tracing, but I don't remember having anything from Carmack, so if you do happen to find it, let me know. Right now, my main source of knowledge is Physically Based Rendering, which is about offline ray tracing, and they do mention cache coherency quite often. There are many ways to speed up the initial rays, but indeed the secondary rays are much worse.
One of the optimizations I was considering was to somehow restrict the camera's movement, so that most of the rays can be reused from previous frames. This would only work in some kinds of games (mostly the ones with a bird's eye perspective), because it's completely impossible in first or third person. Though I'm mostly interested in strategy games, and there it would be perfect.
I'm having a hard time finding the source I remember, but this discussion springing from a comment Carmack made on Ars Technica about ray tracing is pretty interesting.
They mentioned eye tracking, which is indeed something that would be extremely useful in computer graphics in general, but is quite unlikely to be widely available anytime soon. While eye tracking hardware and software is much cheaper now than it was some time ago (because some students decided to design and sell a solution that was orders of magnitude cheaper than the business solutions that existed before), it's still unlikely to become widespread amongst players, unless peripherals like the Kinect start to include a camera dedicated to eye tracking.
Anyway, I'm well aware of the many people who claim that ray tracing is overhyped. As far as I've seen, they mostly claim that for a given amount of processing power, rasterization generally gives better results. A common argument is also that images don't need to be photorealistic or physically based, since players don't see the difference anyway.
I think the main advantages of ray tracing are the flexibility of defining the scene (you are not restricted to triangles and point lights; it is even possible to model semi-transparent volumes). Global illumination is another thing. While many real time ray tracers still ignore indirect lighting, a well written ray tracer can simply decide whether to ignore it or try to measure it, based for example on the computing power available. And obviously ray tracing allows many problems to be solved in a simpler, more intuitive way than in engines based on rasterization: a ray tracer can solve them in a physically based manner, usually by sending more rays, while rasterizers don't have an obvious way to add shadows, reflections (especially ones that cannot be represented as a single transformation matrix), and many other things.
So ray tracing engines could be easier to maintain and extend than rasterization based engines. This could be important if, for example, other trends in computer graphics cause changes that make older engines obsolete. Take procedurally generated graphics: if instead of describing geometry as a mesh of triangles, you describe it as a function that can generate geometry on demand, you might need engines optimized with that in mind.
Anyway, since a ray tracer doesn't have to use primarily (or only) triangles to describe surfaces, I was wondering if using Bézier surfaces as the main way of describing models would be a good choice. Many triangles could be replaced with a single surface, so even if finding the intersection between a Bézier surface and a ray takes more time than finding an intersection between a triangle and a ray, the fact that we'd have to check fewer intersections could actually make it faster. And we gain smooth surfaces no matter how closely we look at them.
I guess I'll start with triangular meshes though, simply because it's easier to produce them in Blender
Modeling worlds with non-triangle-based geometry is hard simply because most modeling software assumes you want triangles (or quads - some people love them, some hate them).
I think another interesting application of raytracing is the ability to have fields of view that exceed the 180 degree limitation of planar projection. I always thought it would be cool to play a game where you have a full 360 degree omnidirectional "eye" that can see in every direction at once. It would be really disorienting, but fascinating at the same time.
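A ray tracer gets that almost for free, because it can map pixels to arbitrary ray directions. A back-of-the-envelope sketch (the mapping below is the standard equirectangular one; the function name is made up): each pixel of the image maps to a unit direction on the full sphere, so the horizontal axis spans 360 degrees with no planar-projection limit.

```scala
// Map a pixel of a w×h image to a unit ray direction covering the whole
// sphere (equirectangular): horizontal axis spans 360°, vertical 180°.
// A planar projection cannot exceed 180° without infinite distortion.
def rayDirection(px: Int, py: Int, w: Int, h: Int): (Double, Double, Double) = {
  val theta = (px + 0.5) / w * 2.0 * math.Pi  // longitude, 0..2π
  val phi   = (py + 0.5) / h * math.Pi        // latitude, 0..π
  (math.sin(phi) * math.cos(theta),           // x
   math.cos(phi),                             // y (up)
   math.sin(phi) * math.sin(theta))           // z
}
```

A rasterizer would have to approximate this with multiple planar renders (e.g. a cube map); a ray tracer just casts each ray along the computed direction.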
Fun Fact: I read your user title as "Fluttershy is not bisexual".
There's a number of ways to express Fluttershy's ninja skills, such as Flutterninja, Ninjashy and Fluttershinobi... but I thought Shynobi sounded best.
And since there's nothing wrong with not being bisexual, I guess that's alright.
Anyway... for some reason I love ponies in kimonos and generally in Japanese outfits. I have a number of ideas for a country to the far east from Equestria, where ninja-ponies live, including some intricacies of their culture and the ruling family of Kirins (pretty much an equivalent of alicorns). I still don't have any ideas for a game set in those settings though.