> Which is why I find it so ironic that many still think only languages like Assembly, C or C++ have a place in IoT on devices like the ESP32.

But if you claim that Java is always fast enough, for example, then I think we're having a strong disagreement. Nobody should optimize representations of Pi or count CPU cycles by default, but many, many programmers (in my perception at least) are extremely dissatisfied with today's state of computing, and the reason we got here is the popular opinion that we don't need to worry about performance. Today I get depressed whenever I need to open Visual Studio, and I don't even know that I use any functionality that wasn't available 20 years ago. Did you see how fast VS6 started on a machine from almost 20 years ago? When I browse the Web, open the Windows File Explorer, open Photoshop, open Visual Studio, launch just about any graphical application that needs GPU acceleration, or do anything else that should not be an issue, I find that Parkinson's law very much applies.

Custom architecture was prevalent back then because there weren't any real standards for developing a system that could render 3D graphics. No one was selling you a pre-built, stable chip; if you wanted 3D rendering, you had to do it yourself. The now-defunct fixed-function pipeline was still being formalized in PCs, let alone consoles. Each console was a massive exploration into what could be done and into who could make the best SDK for that hardware to entice developers to make games. The Saturn, which was out at the same time as the PS1, always struggled to keep up because it was so hard to develop for, even though on paper it was better. I think that was what struck the death knell for Sega.

Back during the PS1, Sony had a heavy hand in the graphics card design. Sony, Nintendo, and Microsoft are no longer able to design the latest and greatest chipsets; the complexity is just too much for them to reasonably pay for. They tell AMD what they want to do, and AMD tweaks an existing chip to do it for them. Why would you reinvent the wheel when someone has already spent decades making Ferraris and is begging you to use their engine at a discount, bulk manufacturing included? If someone has the latest and greatest already available for licensing, just buy it and get on with making amazing games. I know it sounds less glamorous, but honestly, it's the most remarkable thing I've ever seen in the history of modern manufacturing, short of landing on the moon. Having one company able to provide chipsets for any and all applications, including gaming, at a practical whim seems pretty amazing.

The fact that all modern consoles and PCs are similar should be applauded. It's the culmination of decades of hardware and software learnings, standardized into the optimal hardware for graphics and game performance. The fact that Sony, Nintendo, and Microsoft all found what they needed in a similar architecture footprint is good for hardware design, good for developers, good for business, good for porting, good for optimization, and finally, good for performance.

TL;DR: the consoles are all the same because AMD/nVidia are so much better at hardware design than Microsoft/Sony/Nintendo that it's not even funny anymore.

I remember developing for the PS back in 1996-1999. On the first title I worked on, I was building the graphics engine and animation systems: originally in C, then in MIPS assembler to get as much perf as possible, since with a fixed hardware target the difference for your title would be mostly down to the performance of the graphics engine.

I got to the point where I'd fitted the entire graphics engine and animation system into 4K, so it would fit in the instruction cache, and moved as much regularly used data as I could fit into the 1K scratchpad (yes, an L1 cache that you decided manually what to put in!). Access to the scratchpad took 1 cycle. Reading from main memory was slow: it took 4 cycles, which would normally be filled with NOPs by the C compiler (1 load instruction, 3 NOPs). So I'd use assembly instead of C and try to fill those NOPs with other actual operations that didn't need the memory being requested, essentially doing hand-crafted concurrency. Because loading and storing from/to memory was such a common operation, this made the code very, very hard to maintain, and sent me slightly crazy for a while!

Then I'd 'hand interleave' asm operations: often that meant doing the x, y, z operations (for 3D processing, like vector multiplication) concurrently, but wherever the NOPs could be reduced, more could be done. With various other bits of cunning I eventually got it to the point where I broke the manufacturer's specs for whatever Sony said the PS could do per second (memory is a bit fuzzy about what those specs were, but I remember myself and the team being pretty damn pleased at the time).
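The "fill the NOPs with useful work" trick from the PS1 anecdote above can be sketched roughly as below. This is an illustrative R3000-style fragment, not code from the original post: the register names, offsets, and the 4-cycle load figure are assumptions taken from the comment's own description.

```asm
# Hypothetical sketch: scale a vertex's x/y/z by a factor held in $t3,
# assuming (per the anecdote) a load takes 4 cycles before its result is usable.

# Naive, compiler-style sequence: stall on NOPs after each load.
        lw    $t0, 0($a0)       # load x
        nop                     # 3 wasted cycles waiting on memory
        nop
        nop
        mult  $t0, $t3          # x * scale -> HI/LO
        mflo  $t0

# Hand-interleaved: issue the y and z loads in the slots where the naive
# version stalled, so the memory latency is hidden behind useful work.
        lw    $t0, 0($a0)       # load x
        lw    $t1, 4($a0)       # load y (fills a would-be NOP slot)
        lw    $t2, 8($a0)       # load z (fills another)
        mult  $t0, $t3          # by now x has arrived
        mflo  $t0
        mult  $t1, $t3          # y * scale
        mflo  $t1
        mult  $t2, $t3          # z * scale
        mflo  $t2
```

The second form is exactly what makes such code hard to maintain: every reordering has to track which registers are still "in flight" from memory, by hand, across the whole routine.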