“It’s going to be really tough to convince [new developers] to be technically clean when they work.”
When Sony announced that PlayStation 4 would include 8GB of GDDR5 RAM, developers and consumers alike marvelled at the possibilities afforded by so much memory.
But the amount of RAM available in next-generation consoles could lead to developers failing to optimise code and spawn a generation of ‘lazy’ game creators, developers working on PlayStation 4 and ‘other next-generation consoles’ have warned VideoGamer.com.
Multiple developers working on next-gen titles agree that there is “absolutely” potential for dev teams to fail to optimise their software due to the huge amount of memory available, telling us that developers will have to be “very careful not to become sloppy”.
“It could happen, I have to admit,” Eidos Montreal producer Stephane Roy told us when asked about such a risk. But the temptation to cut corners with code will likely come from “kid” developers, Roy suggests – junior coders who enter the industry during the next console generation – rather than veteran developers.
“Probably not from people who used to work here on PlayStation 2 and stuff like that,” he continues. “But let’s say you’re a kid and you start on these platforms, I have to admit that it’s going to be really tough to convince them to be technically clean when they work, and optimisation and stuff like that. So I can see it happening. It’s possible.”
Console developers have typically had to optimise code to achieve results on a relatively small amount of memory. The Xbox 360 features only 512MB of GDDR3 RAM, while PlayStation 3's 512MB is divided between 256MB of system memory and 256MB of video memory – minuscule amounts compared to those available in high-end gaming PCs.
By comparison, PlayStation 4 offers 8GB of GDDR5 RAM, with rumours suggesting that Microsoft’s next-generation Xbox could include 8GB of slower DDR3 memory.
But with such a large amount of memory available to drive the software, and the widely-publicised similarities between PlayStation 4 and PC, there may be less of a necessity for developers to optimise their code for console – and a greater risk of developers failing to make the most of next-gen console hardware.
Linus Blomberg, CTO and co-founder of Just Cause developer Avalanche Studios, agrees that such a risk exists.
“Absolutely,” he replies, when asked whether there is a risk of some developers taking shortcuts next-gen, “but that’s not just a bad thing.
“It also means that games that [don't] need to push technical boundaries will be easier and quicker to develop, for instance, most indie productions. For us as a AAA open-world games developer however, we must be very careful not to become sloppy…
“It’s both a risk and an opportunity, depending on what kind of games you develop.”
Ensuring developers don’t take shortcuts with their code is “a question of education”, adds Roy, who says that a game’s “technical director will have to be really careful” to keep on top of development. But under circumstances that may complicate development or waste time, Roy believes there may be valid reasons to leave code unoptimised.
“From a production point of view, we will have to find a balance between should we optimise the job, because maybe it’s just useless to optimise it,” he continues.
“Let’s say the kid is creating something amazing quickly. It doesn’t run on PlayStation 3 but it runs perfectly on PlayStation 4. Should we optimise it? Maybe not, because it’s running well, there’s no technical problem and we save time. If the problem is we’re too messy and we don’t optimise and finally at the end we can not give you an interesting gameplay because we’re not good to optimise, now there’s a problem.”
However, Blomberg believes that the risk is only temporary and typical to the nature of console development (“it’s the same challenge with every console generation,” he says, “so we’re used to counteract[ing] it”) – a notion shared by Just Add Water CEO Stewart Gilray.
“Thing is, that happens every generation,” Gilray says about the risk of failing to optimise. “It’s the sort of thing that within a year will be knocked on its head…
“We had the same problem going from PS1 to PS2, from 2MB to 32MB, then from 32MB to 256MB in PS3, and then on Vita you had 512MB, you know?
“On [PS4] you’ve got 8GB, but it’s just the old Moore’s Law thing; it’s incremental scale increasing. There’s other things we’ll have on PS4 as well: the faster GPU and CPU, and the way they work together and compute stuff.
“Again people will just be lazy and not optimise their code enough because they’ve got hardware that will drive it as is. That’s what console development is all about. The first [method of] getting something out there, they say, ‘How do we do that? Okay, we can make it better the next game.’ And the more games that come out look better and better.”
Gilray appears to agree with Blomberg and Roy that independent and junior developers could be the most susceptible to taking shortcuts with their code, however.
“I think indies/new developers might be complacent in the first year or so with 8GB,” he says, “but once they start to learn about it and what it can do for them properly by optimising, I think we’ll start to see what we see in every generation of hardware. Games that look great now will look amateurish in two or three years’ time. And that’s just experience.”
A major difference between this generation and the last, though, appears to be the proximity between next-generation consoles and PCs.
With the margin between PC and console development growing tighter, and the assumption that some multi-format dev teams will decide to simply port their PC code to console, there could be a risk that developers fail to optimise their code to run more efficiently on consoles, leading to software that fails to make the most of next-gen hardware.
“PC developers have traditionally been much sloppier about optimizations in general, which has come back and bit them when they’ve tried to port their games to consoles,” says Blomberg. “This time they’ll have it easier, but it also gives developers with a console background like us an edge, because we are more used to pushing the hardware.”
And when it comes to ‘pushing the hardware’, Blomberg believes that 8GB will play a vital role in preventing bottlenecks elsewhere in the system.
“One way [to make the most of next-gen hardware] is to work a lot more with caches instead of reading from disk or generating data on the fly,” he says. “Disk reads will always be slower than memory accesses, so that’s one way the extra memory can speed up performance. For us that deals with enormous amounts of data this is a very welcome change.”
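Blomberg’s point about trading disk reads for memory accesses can be sketched with a minimal, hypothetical asset cache in Python – this is purely illustrative and does not reflect Avalanche’s actual engine code, where names like `AssetCache` are invented for the example:

```python
import os
import tempfile

class AssetCache:
    """Keep file contents in RAM after the first disk read,
    so repeat accesses never touch the (much slower) disk."""

    def __init__(self):
        self._cache = {}

    def load(self, path):
        # First access pays the disk-read cost; later accesses hit memory.
        if path not in self._cache:
            with open(path, "rb") as f:
                self._cache[path] = f.read()
        return self._cache[path]

# Demo with a temporary file standing in for a game asset.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"texture-data" * 1024)
    asset_path = tmp.name

cache = AssetCache()
first = cache.load(asset_path)   # reads from disk
second = cache.load(asset_path)  # served straight from memory
assert first is second           # same object: no second disk read occurred
os.remove(asset_path)
```

With 8GB to spare, a far larger working set of assets can stay resident like this instead of being streamed from disk or regenerated on demand.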
The inclusion of 8GB in PS4 came as a surprise to third-party developers too, Gilray suggests, with studios having worked under the expectation that final retail hardware would include only 4GB.
“We were told [PS4] was 4GB originally, and we first knew it had 8GBs when Mark [Cerny, PlayStation 4 lead architect] said at the event’s stage, ‘And it has 8GB of memory.’ We’d had kits at that point for a good while.”
Only “a couple of really close first-parties” knew PS4 would feature 8GB prior to the console’s announcement, Gilray suspects, “but I think most third-parties, if not all third-parties were like, ‘Yeah, 4GB, awesome, can’t wait.’”
Sony hasn’t yet publicly stated how much of PS4’s memory will be available for developers to use in their software or how much will be reserved for system resources, but Gilray suggests that developers may be given the option to use all 8GB.
“The added bonus [with PS4] is [that Sony has] already ring-fenced the system memory away from the game memory, so there’s none of this business that we had with PS3 of having to share memories. When you press the PlayStation button on a PS3 game, you get the basic XMB up [but] to do anything you have to quit the game, because of the memory for it. With PS4 we don’t have that because the system memory is already ring-fenced for itself.”
But regardless of the amount of usable memory available in next-generation consoles, Roy believes that developers will need to be “smart” about the way in which they approach development, saying that “it would be stupid to think, ‘Alright, it’s more powerful, so let’s do stuff bigger and bigger.’”
“We will have to be careful because it would be easy [for us to say], ‘it’s more powerful, let’s have more food on the plate’,” he says. “It would be dangerous for the cost of the product. Let’s say you go into a restaurant and now you have more money. There is a maximum of what your stomach can receive, I guess. So it’s the same thing… We have to be smart.
“That said, I think now with this technical freedom – it’s not unlimited but we have much more freedom – I think now, the designers and artists will be able to really do what they have in mind. Too often there is a technical restriction and it’s not exactly the gameplay experience we want to give you. I’m pretty sure the artists and designers are really going to find a way to [fill] the memory, but we should be able now to create a game that what exactly we have in mind should be in front of you.
“So [the amount of RAM] is really, really cool,” Roy continues, “but let’s be smart and let’s make sure that we’re going to use this next-gen to support the gameplay, the fun factor, and the experience we want to give you, and not just be impressive for the technical aspect. If it’s not fun – even if it looks great, even if the physics are realistic – if it’s boring, then we’ve failed.”