An Interview with PcGames.de about PS4 and the next gen

2013 Housemarque

We’ve recently answered a few questions asked by PcGames.de about the next generation of consoles, which will be unveiled today and tomorrow at E3 2013.

The guys were nice enough to let us republish the whole interview in English. Enjoy!

TDB: Hello there. I’m Tommaso, and I’ve been working at Housemarque as a Community Manager for a few months now. I had collaborated with Housemarque before as an external freelancer, but at the beginning of the year we decided to focus a bit more on our fans, so it made sense to have me around the office a bit more. What we are trying to do is build a more personal relationship with the people playing our games, and bit by bit I think we are moving in the right direction. Housemarque fans are a really dedicated crowd, so it’s often a pleasure to talk with them. Jere is one of Housemarque’s Lead Programmers; he’s answering the technical questions.

The jump from PS2 to PS3 was a very obvious one – we got HD resolution, HD textures, a competent online environment and overall a dramatically improved fidelity. The step to PS4 does not seem as steep. What will be the biggest, actually noticeable difference when we play next-gen titles? What can PS4 and XBO do significantly better than their predecessors?

TDB: The main difference is availability. Recent PC games have the edge over the current generation of consoles – it’s quite hard not to see that – but developers still have to build them keeping in mind platforms that are seven years old. Once PS4 and Xbox One are on the market, many of these limitations will be lifted, so developers will be free to introduce the kind of technology that can actually have an impact on gameplay, knowing that everybody will be able to experience it in the best possible way.

A key differentiator of this new generation of consoles from high-end PCs will be the services and exclusive features: think of PS+, or take for example the possibility to suspend a game at any point – which incidentally is my favorite feature on Vita. That simple change will completely modify the way people experience games in the living room. Also, the possibility to share footage on the fly will make certain games so much more enjoyable on console, removing all the obstacles one can have on PC.

PS4 and Xbox One feature those flexible compute units that can be used to calculate graphical effects, but can also provide compute power for physics and more. This is still very vague for me. How exactly does this impact game development and fidelity? What effects and applications will these CUs be used for, other than dropping a million Viagra pills from the sky?

JS: The compute units can be used for any sort of task that is heavy in computation and bandwidth but light on logic. What I mean by this is that they are best suited for smaller tasks that need to be repeated over a big data set. As you said, physics is one major area, but even within it, the CUs don’t actually do “physics”; they do a lot of smaller tasks that together result in a physics simulation. This includes tasks like collision detection, rigid body updates, constraint resolution, etc. The CUs can fit any such numerically heavy task. Some examples are soft body simulation, fluid/gas simulation (particle- or grid-based), advanced audio processing/mixing, driving more advanced animations, more complex AI/pathfinding, more complex visibility algorithms (seeing only partially through bushes, for example) and so on. It sounds boring, but they can also be used to boost graphics and visual fidelity; they can fit certain algorithms better than the vertex/pixel pipeline can. While CUs have been available on PCs for some time now, their availability is hugely increased with the new consoles (because everyone has them). Their concurrent use alongside graphics has also been streamlined, so it’s easier to interleave compute tasks with graphics.
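To illustrate what “a small task repeated over a big data set” looks like in practice, here is a minimal sketch of our own (not Housemarque code, and in NumPy rather than on a GPU): one branch-free physics update applied to a million particles in a single data-parallel sweep – exactly the shape of workload that maps well onto compute units.

```python
import numpy as np

def step_particles(positions, velocities, dt, gravity=-9.81):
    """Advance every particle by one Euler integration step in one
    vectorized pass -- the same simple update applied to all rows,
    with no per-element branching or logic."""
    velocities = velocities.copy()
    velocities[:, 1] += gravity * dt          # apply gravity to every y-velocity
    positions = positions + velocities * dt   # move every particle at once
    return positions, velocities

# One million particles, updated in one data-parallel sweep.
pos = np.zeros((1_000_000, 3))
vel = np.ones((1_000_000, 3))
pos, vel = step_particles(pos, vel, dt=0.016)
```

The point is the shape of the computation, not the library: every element gets the same arithmetic, so the work can be spread across as many compute lanes as the hardware offers.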

Something that is already being teased and – on the PC market – tested are games that require an online connection, even though they don’t necessarily offer only multiplayer. So far (see SimCity on PC) I am a bit skeptical about that. But a persistent and evolving game world sounds cool. What are your thoughts on that feature, and where do you see its best application?

JS: The best applications for us would be ones that augment the game experience (be it single- or multiplayer) without getting in the way. Their use should be optional, so that the game experience is still good enough when playing offline.

The new consoles all seem to incorporate a lot of background activities to handle updates, downloads and the like. This is immensely convenient for the end user, but are there also some impacts on the development side?

JS: Any new platform has some kind of impact on development. That said, the console manufacturers’ goal seems to be that these background activities are relatively easy to incorporate into a game. This means that for a little extra effort, the user experience is greatly enhanced.

What are the biggest challenges in developing for the new generation? A lot of voices in the industry point to the rising production values and scope of next-gen titles, but how do you as a smaller developer evaluate this?

TDB: I think we will witness a progressive disappearance of middle-tier games, so there’s definitely a push towards the higher end of the spectrum. In that sense costs can increase, if animations need to look better, if textures need to look better, etc. In absolute terms, teams will not double – we are already at a tipping point. One thing we will see more of is people stepping back from realism a bit, because if you go in that direction, the effort needed to make your game look 10% better is not proportional to what you need to spend to achieve it. In a way that’s good: in the coming years we will see a lot of games that are driven by style as much as technology. You don’t always have to make something “realistic” to make it look cool.
 
Speaking of smaller developers – Sony seems to be very focused on getting indies onboard, a development I am very excited about. How might this affect the gaming landscape a few years down the road?

TDB: These past years have shown that the indie community is not only gaining traction, but is also vital for innovation. Certain games that wouldn’t make sense as AAA titles can still get represented by smaller, authorial productions that are maybe reduced in scope but can explore aspects that blockbusters can’t.

Also, it’s pretty clear that there’s a gap in the way games are priced: retail titles for 65€ and downloadable games for 10-15€. That doesn’t make much sense anymore when every game becomes “downloadable”. There’s a lot more room for price flexibility there, and consumers are actually demanding it – see how much experimentation is going on in the PC space. We’ve witnessed really crowded release seasons (think of last October-November and March); it’s clear that more flexibility in pricing (and business models) can help games sell outside the holiday season.

Next-gen games will be able to utilize “second screen” devices – tablets, smartphones and PCs. “Second screen” is becoming a popular buzzword at the moment. But what exactly can we expect from that feature? What are the possibilities beyond Call of Duty: Elite-like statistics porn and profile management?

TDB: Companion apps will be important in the coming years. The idea that you can still influence the game even when not sitting on the sofa in your living room is really cool, and if done well, I can’t see any reason for people not to embrace it. It’s not all stats porn.

PS4 will feature eight gigabytes of GDDR5 RAM, a super-high-bandwidth memory. Some online users are geeking out about that fact. But why this excitement? How important is a big and fast memory for making games? And why is eight GB the sweet spot for next gen?

JS: Time will tell. The way PS4 uses GDDR5 allows developers to use greater bandwidth for demanding tasks. This includes some obvious things like graphics using higher-resolution textures and more sampling/filtering for effects such as volumetric lighting or realtime global illumination. Also, the CUs and the CPU operate on the same memory, so results can be shared easily between them. This will make it easier to lift some heavy processing tasks from the CPU onto the CUs. As for 8 GB being the “sweet spot”, I don’t know for sure, but I can guess that the cost of memory is one reason. There is, however, one more variable to consider, and that is loading times. Blu-ray drives haven’t kept up with the speed increase, and even hard drives will be stressed trying to keep up. So when it comes to games, the big problem is how to actually load the data into memory fast enough to use it effectively. Approached from that angle, going from 8 GB to 16 GB wouldn’t give you as big a boost as going from 4 GB to 8 GB does.
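The loading-time argument can be made concrete with a quick back-of-envelope calculation. The sketch below is ours, not from the interview, and the sustained read speeds (roughly 27 MB/s for a 6x Blu-ray drive, roughly 100 MB/s for a hard drive) are assumed ballpark figures:

```python
def fill_time_seconds(ram_gb, read_mb_per_s):
    """Seconds needed to stream `ram_gb` gigabytes of data into memory
    at a sustained read speed of `read_mb_per_s` megabytes per second."""
    return ram_gb * 1024 / read_mb_per_s

# Assumed sustained read speeds (ballpark figures, not spec values):
bluray = fill_time_seconds(8, 27)    # filling 8 GB from a 6x Blu-ray drive
hdd = fill_time_seconds(8, 100)      # filling 8 GB from a hard drive
```

At these assumed rates, filling 8 GB from Blu-ray alone takes on the order of five minutes, which is why streaming data in cleverly matters more than simply having a bigger memory pool – and why doubling to 16 GB would mostly just double the fill time.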

Reading our questions, you might get an idea of our mindset, looking into next-gen. But are we missing something there? Are we blindly ignoring a fact or feature, that really excites you? Tell us!

TDB: It looks to me like collaborative gaming will become more central in the future, not only in games with a co-op mode, but also through things like “remote play” if you need help getting past a level or something. It might sound boring to the experienced player, but look at it this way: if you’re really good at something, what’s stopping you from helping your friends, training them, and making them as good as you? I think that’s cool.

