
Comment by perching_aix

7 days ago

> In addition, many titles are designed from the ground-up to be online-only; in effect, these proposals would curtail developer choice by making these video games prohibitively expensive to create.

How? Can't wait to hear them substantiate this tidbit, because from a regular enterprise operations viewpoint it does NOT pass the smell test.

When I found out that Booking[.]com of all companies is moving major traffic, I started to look at what companies are even buying or selling anymore. I clearly had no idea.

In the following paper, CPs refer to content providers, as defined in the paper.

https://estcarisimo.github.io/assets/pdf/papers/2019-comnets... [pdf]

(more at https://estcarisimo.github.io/publications/ )

Canonical link for the above paper (the links above are to the lead researcher's GitHub, from what I can tell):

https://www.sciencedirect.com/science/article/abs/pii/S01403... ( https://doi.org/10.1016/j.comcom.2019.05.022 )

> Studying the Evolution of Content Providers in IPv4 and IPv6 Internet Cores

> Esteban Carisimo, Carlos Selmo, J. Ignacio Alvarez-Hamelin, Amogh Dhamdhere

[I have edited out some hyphens that made this really hard to read but were helpful due to the layout of the original document as typeset. If that bothers you, I'm sorry in advance. Links are included above.]

> Our goal is to investigate what role CPs now play in the Internet ecosystem, and in particular, if CPs are now a part of the “core” of the Internet. Specifically, we motivate this work with the following questions: How can we identify if a CP does or does not belong to the core of the Internet? If the core of the network does indeed include CPs, who are they? As the overall adoption of IPv6 has been slow, do we notice that delay on IPv4 and IPv6 core evolution? As the AS ecosystem has shown striking differences according to geographical regions [15], do we also see geographical differences in the role of CPs and their presence in the “core” of regional Internet structures? Finally, as more CPs deploy their private CDNs, can we detect “up and coming” CDNs that are not currently in the core of the network but are likely to be in the future?

> We use the concept of k-cores to analyze the structure of the IPv4 AS-level internetwork over the last two decades. We first focus on seven large CPs, and confirm that they are all currently in the core of the Internet. We then dig deeper into the evolution of these large players to correlate observed topological characteristics with documented business practices which can explain when and why these networks entered the core. Next, we repeat the methodology but using IPv6 dataset to compare and contrast the evolution of CPs in both networks. Based on results, we investigate commercial and technical reasons why CPs started to roll out IPv6 connectivity.

> We then take a broader view, characterizing the set of ASes in the core of the IPv4 Internet in terms of business type and geography. Our analysis reveals that an increasing number of CPs are now in the core of the Internet. Finally, we demonstrate that the k-core analysis has the potential to reveal the rise of “up and coming” CPs. To encourage reproducibility of our results, we make our datasets available via an interactive query system at https://cnet.fi.uba.ar/TMA2018/

[…]

> Finally, we study the core evolution of nine other remarkable CPs that belong to the TOPcore but were not included in the Big Seven. Seven of the nine selected ASes are the remaining ASes in Böttger et al.’s [47] TOP15 list, except Hurricane Electric (AS6939) which we do not consider as a CP since it is labeled as Transit/Access in CAIDA’s AS classification [80]. These seven ASes are OVH (AS16276), LimeLight (AS22822), Microsoft (AS8075), Twitter (AS13414), Twitch (AS46489), CloudFlare (AS13335) and EdgeCast (AS15133). The other two ASes are Booking.com (AS43996) and Spotify (AS8403). Interestingly, Booking.com or Spotify are not normally considered among the top CPs, however, they are in both TOPcores.

  • What else would these companies have to gain by making their games online-only? Perhaps game developers even have contractual obligations to uphold, or incentives to include third-party network interactions. The presence of Twitch, Cloudflare, and Microsoft on this list is interesting, because Microsoft drives a lot of threat intel and also makes a popular OS among gamers. If you want to reduce network traffic and your reliance on third parties and internet access, migrating away from Windows and using Proton on Linux would probably be a step in the right direction for many of the games you would want to play single-player.

Imagine you're an indie game studio developing an MMORPG: both your server and client are likely under constant development, and you may only have one or two actual production servers running your server code.

Now this proposal requires you to also continually release your server code,[1] while adding documentation and support for different systems, while ensuring safety as the server can now be reverse engineered, and while possibly being liable for abuse created through those servers. And all that even though your game (and its clients) isn't tailored to work with any server other than the official one anyway.

At least that's my understanding of the issue.

This proposal is obviously aimed at big publishers like EA and Ubisoft, but it hurts small developers. I'd argue we should just stop playing EA and Ubisoft games, since they are the only ones who keep pulling this crap.

[1]: As TheFreim pointed out, this isn't necessarily required. But the server program has to be released when the official servers are shut down. Which means this possibility has to be prepared for throughout development.

  • > Imagine you're an indie game studio developing an MMORPG

    To my understanding, this wouldn't affect MMORPGs where you're explicitly buying X months of access (so long as you do get the access you paid for, or a refund if it's shut down early), which is how most of the ones I'm aware of work.

    > Now this proposal requires you to also continually release your server code,[1] while adding documentation and support for different systems,

    The proposal requires leaving the game in a reasonably playable state, but not any specific actions like these. In fact the FAQ specifically says "we're not demanding all internal code and documentation".

    > while ensuring safety as the server can now be reverse engineered, and while possibly being liable for abuse created through those servers

    I don't see why the company would be liable for this. Moderation of the private servers would be up to those running the private servers. If there is something to this effect in EU law that I'm unaware of, it seems like it'd already be placing undue burden on games that do currently (or want to) release their server software and that this initiative would be a good opportunity to exempt them from that liability.

    > but it hurts small developers

    If anything, I'd speculate small developers are likely to have less trouble releasing server software/code, and are more likely to have a game this doesn't even apply to in the first place, giving them an edge over larger publishers.

    But even if it were a significant burden, I feel it's really just providing what was already purchased. At the extreme, do you think it'd be okay to take $70 from someone for a singleplayer game, then shut down authentication servers (rendering it unplayable) a few minutes later?

  • > Now this proposal requires you to also continually release your server code.

    This is not accurate. From the FAQ:

    > Q: Won't this consumer action result in the end of "live service" games?

    > A: No, the market demand and profitability of these games means the video games industry has an ongoing interest in selling these. Since our proposals do not interfere with existing business models, these types of games can remain just as profitable, ensuring their survival. The only difference is future ones will need to be designed with an "end of life" build once support finally ends.

    I suggest reading the proposal or /at least/ the FAQ page: https://www.stopkillinggames.com/faq

    • I was actually reading the FAQ just now.

      From my understanding, a company does not have to release a private server alongside the client while the official servers are live; what I said previously was inaccurate. But when the official servers are closed, they are required to provide one.

      However, I don't see how a bankrupt studio can release its server code when it doesn't have enough money to keep its servers running. An MMORPG shutting down its servers may not even have any developers left. It may also not have any players left.

      The FAQ suggests that this won't burden development at all, but I believe that it will.

      Regardless of whether they continually release their server code, they still need to develop an "end of life" plan, which means having the server code ready to release when they want to kill their servers.
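
      Concretely, "having it ready" throughout development would mean keeping a seam between the game logic and any services that die with the official infrastructure, so an end-of-life build can be produced at any point. A minimal Go sketch of the idea (all names here are hypothetical):

          package main

          import "fmt"

          // AccountService is the seam between the game server and
          // whatever account backend it talks to.
          type AccountService interface {
              Authenticate(token string) (playerID string, err error)
          }

          // officialAccounts would wrap the proprietary accounts API
          // used while the official servers are live (omitted here).

          // localAccounts is what the end-of-life build ships instead:
          // self-contained, with no third-party dependency.
          type localAccounts struct{}

          func (localAccounts) Authenticate(token string) (string, error) {
              // Accept any token; a community host can layer its own auth.
              return "player-" + token, nil
          }

          func main() {
              var accounts AccountService = localAccounts{}
              id, err := accounts.Authenticate("abc123")
              fmt.Println(id, err)
          }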

      I think one of the most relevant parts of the FAQ is:

      > Q: Isn't it impractical, if not impossible to make online-only multiplayer games work without company servers?

      > A: Not at all. The majority of online multiplayer games in the past functioned without any company servers and were conducted by the customers privately hosting servers themselves and connecting to each other. Games that were designed this way are all still playable today. As to the practicality, this can vary significantly. If a company has designed a game with no thought given towards the possibility of letting users run the game without their support, then yes, this can be a challenging goal to transition to. If a game has been designed with that as an eventual requirement, then this process can be trivial and relatively simple to implement. Another way to look at this is it could be problematic for some games of today, but there is no reason it needs to be for games of the future.

      I too want online games to be killed responsibly, but I don't think Stop Killing Games is being honest about how this will influence small budget game development as opposed to the big publishers they keep talking about.


  • > But the server program has to be released when the official servers are shut down. Which means this possibility has to be prepared for throughout development.

    ... which is why it doesn't pass my smell test.

    Say you're working on either a monolithic game server codebase, or just a microservice that's part of a larger service mesh fulfilling that role. Are you writing any tests? You probably (hopefully) are. So where's that code going to run for the first time, before it's even pushed up to version control? Locally. So at least parts of it definitely have to run locally, or, if you have good test coverage, all of it.
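
    A minimal Go sketch of that point, with a hypothetical entry point standing in for a real game server: because the server takes a listener, the exact code production runs can just as well be booted on a loopback port, by a test or by a player.

        package main

        import (
            "bufio"
            "fmt"
            "net"
        )

        // startServer stands in for a real game server entry point
        // (hypothetical). The point: it accepts a listener, so it binds
        // to localhost in a test just as easily as to a production address.
        func startServer(ln net.Listener) {
            for {
                conn, err := ln.Accept()
                if err != nil {
                    return
                }
                go func(c net.Conn) {
                    defer c.Close()
                    fmt.Fprintln(c, "hello from the game server")
                }(conn)
            }
        }

        func main() {
            // Bind to any free loopback port, exactly like a local test run.
            ln, err := net.Listen("tcp", "127.0.0.1:0")
            if err != nil {
                panic(err)
            }
            go startServer(ln)

            // The "client" side: connect to the locally running server.
            conn, err := net.Dial("tcp", ln.Addr().String())
            if err != nil {
                panic(err)
            }
            defer conn.Close()
            line, _ := bufio.NewReader(conn).ReadString('\n')
            fmt.Print(line)
        }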

    But okay, let's go a layer further. Say you're trying to take this into production. As the saying goes, everyone has a production environment, but the lucky folks "even" have others. The sarcasm implies that you need to be able to deploy your solution into multiple environments. And you don't want to be doing this manually, because then you have no CI/CD, and thus no automated testing on code push. That's not even considering multi-geo deployments: for multiplayer games latency matters, so you really want to deploy to the edge or close to it, and you'll definitely want presence around the world, at least in a few key places.
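
    A minimal sketch of what that multi-environment setup usually boils down to, in the usual twelve-factor style (variable names hypothetical): one identical binary everywhere, with only the injected configuration differing between dev, staging, and each production region.

        package main

        import (
            "fmt"
            "os"
        )

        // Config captures what actually differs between environments;
        // the binary itself stays identical everywhere.
        type Config struct {
            ListenAddr string
            Region     string
        }

        func envOr(key, fallback string) string {
            if v := os.Getenv(key); v != "" {
                return v
            }
            return fallback
        }

        func main() {
            cfg := Config{
                // Local dev gets sane defaults; CI/CD injects the real
                // values per environment and per region at deploy time.
                ListenAddr: envOr("LISTEN_ADDR", "127.0.0.1:7777"),
                Region:     envOr("REGION", "local"),
            }
            fmt.Printf("region=%s, listening on %s\n", cfg.Region, cfg.ListenAddr)
        }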

    So you can test locally, and you can deploy automatically. Tell me, what's the hold-up then? It would take me approximately one minute to hand you the binaries for anything I ever touch, because if I couldn't do that, the automation couldn't either. At some point the bullshit has to end, and that's at operations. Not many docs to write either: if your stuff does anything super, super custom, you're doing something very wrong. And respectfully, if the above doesn't apply to you, you shouldn't be operating any online service at scale in production for anyone in 2025.

    Really, the only technical wrenches you can throw into this that I can think of are licensing and dependencies, and neither of those is a reasonable spot to be in from an economic or a technical standpoint. Like what, you can't mock other services? How are you testing your stuff then? Can't change suppliers/providers? How is that reasonable from a business agility standpoint?
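
    For what it's worth, mocking a third-party dependency is a stdlib one-liner in most ecosystems. A Go sketch (the endpoint and payload are made up): if your server can't be repointed at a stand-in like this, it can't be tested in isolation either.

        package main

        import (
            "fmt"
            "io"
            "net/http"
            "net/http/httptest"
        )

        func main() {
            // Stand-in for some third-party dependency, say an
            // entitlements or payments API (hypothetical).
            mock := httptest.NewServer(http.HandlerFunc(
                func(w http.ResponseWriter, r *http.Request) {
                    fmt.Fprint(w, `{"entitled": true}`)
                }))
            defer mock.Close()

            // The code under test gets pointed at mock.URL instead of
            // the real provider's base URL.
            resp, err := http.Get(mock.URL + "/v1/entitlements/player-42")
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            fmt.Println(string(body))
        }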

    So clearly if there is a salient technical rationale for this, it's going to have to be a very sharp departure from anything I've ever experienced in non-gaming enterprise, or my common sense.

    Regarding all the other points (and this will read dismissive because I've already rambled on way too long and I'm trying to keep it short, I genuinely don't mean it like that):

    - if you're writing an MMORPG as a small up-and-coming indie, you're definitely going bankrupt

    - if you're writing an MMORPG, I'm pretty sure you'll have more than just one or two servers running, or there's nothing massive about that massively multiplayer online role-playing game after all

    - it does not require you to continually release anything

    - it does not request you to release documentation (what is there to "document", by the way? I'm certainly not imagining much)

    - it does not request you to support different systems

    - it does not request you to release anything before EOS; thus, security concerns for the official client are null and void - and even if they weren't (e.g. sequels), security by obscurity is not a reasonable security story anyway

    - the dangerous parts of the reverse engineering efforts still routinely happen without access to server binaries anyways (see all COD games and their players getting hacked to pieces right as we type away)

    - possibly liable is not liable, and I trust you're not a lawyer, just like I'm not

    - it's just a client-server setup like any other - remember, other environments must be possible to connect to as well, if nothing else then for testing (see the sketch below)
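
    On that last point, the client-side knob is typically nothing more exotic than a configurable server address. A Go sketch (flag name hypothetical): the client already has to target dev, staging, and production servers during development, so "point it at a community server" is the same mechanism.

        package main

        import (
            "flag"
            "fmt"
            "net"
        )

        func main() {
            // The client connects to whatever address it is given,
            // official or community-run.
            addr := flag.String("server", "127.0.0.1:7777", "game server address")
            flag.Parse()

            conn, err := net.Dial("tcp", *addr)
            if err != nil {
                fmt.Println("could not connect:", err)
                return
            }
            defer conn.Close()
            fmt.Println("connected to", conn.RemoteAddr())
        }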

    All of this is completely ignoring how we had dedicated servers and competition events with private setups since forever.

    I legitimately cannot imagine that you could cock up an online service architecture and codebase badly enough that a team of devs and devops/SREs/ops people, or even just a few of them, couldn't get something mostly operational out the door in a few-day(!) hackathon at most, even without planning for any of this. And how this would skyrocket the costs especially mystifies me. Surely asset development, staffing, operational costs and marketing are the cost drivers here? How would this surpass ALL, or even ANY, of that? It just doesn't make sense!