Gif Maker for Monogame

I was playing around with some graphics code and wanted to share it, but I was tired of going through the convoluted process of recording my screen and generating a gif (or reaching for a separate gif-making tool). I wanted something that would always be there. Since I couldn’t find a solid library that takes Monogame screen data and outputs a gif, I decided to write one myself!

Monogame.ScreenTools

I created a small library, which I’m calling ScreenTools, with a few utilities for creating screenshots and generating gifs. Included in the library is a test project with example usage.

Full Source here
Test App controls: F12 for gif (hold, then release when done), F9 for screenshot (one per keypress).

I intend to improve it as needed as I fold it back into my game code. The immediate improvements I plan to add are multi-threading and compression. Exploring the problem space made me realize that I could write a more efficient, Monogame-specific gif maker; for now, it uses .NET’s System.Drawing to bootstrap the process and .NET’s gif codecs for output. That improvement is a longer-term one unless I find that someone else has already done it.
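To give a sense of the capture step (this is an illustrative sketch with a hypothetical ScreenCapture class, not the library’s actual code), grabbing a frame in Monogame boils down to copying the backbuffer into a Texture2D; a gif is then just a sequence of these frames handed to an encoder:

using System.IO;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical helper illustrating the capture step; not the ScreenTools API.
public static class ScreenCapture
{
    // Copies the current backbuffer into a Texture2D and writes it out as a PNG.
    public static void SaveScreenshot(GraphicsDevice device, string path)
    {
        int w = device.PresentationParameters.BackBufferWidth;
        int h = device.PresentationParameters.BackBufferHeight;

        // Grab the raw pixel data from the backbuffer.
        var pixels = new Color[w * h];
        device.GetBackBufferData(pixels);

        // Wrap the pixels in a texture so we can use the built-in PNG encoder.
        using (var texture = new Texture2D(device, w, h))
        using (var stream = File.Create(path))
        {
            texture.SetData(pixels);
            texture.SaveAsPng(stream, w, h);
        }
    }
}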

This small library was an impulse build so.. expect bugs.

-Jonathan

Misbehaving collision and also ECS

Ah collision you naughty monkey.

On the bright side, Visual Studio 2017’s performance monitoring tools are pretty cool – when they work. In this case, the tools highlighted the exact offending line of code where the CPU is spending a comically high 50% of its time! For once, it’s not premature optimization, but I was expecting this. When I hack together game engines in intense one-day sessions, I tend to take brute force approaches to every problem in an effort to get something working on screen. Function that’s O(N^4), looping through every object in the game? Whatever, slap a ‘TODO’ on it and make it someone else’s problem. In this case it’s O(N^2), but what’s a power of two between friends.

I first came upon the issue when I was running a test level and found an ‘AddRandomEntities()’ command in the console window mapped to F1. Curious, I kept hitting F1 until my game slowed to a crawl. I looked at the data and saw that a mere 600 collidable objects had brought the engine down. That may seem like a lot, but add a bit of bullet hell and monsters on top of more unique objects and that number comes down real quickly.

 

“A poorly coded collision function” – circa 2017

 

Fortunately, this is an easy fix in a 2D game: subdivide the world via a quadtree or similar structure (see the sketch below). Really, even subdividing the screen into quadrants would roughly quadruple performance. For an in-depth tutorial on building quadtrees (along with nice explanatory graphics) try this. I know, I should have written something up about it here, but apparently my old post was taken down from Stack Overflow for some odd reason.
Anyway, without that offending and naive collision function, the engine can render a few hundred thousand objects at 120 fps.
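To illustrate the idea – this is a generic sketch of a uniform-grid broad phase, not my engine’s code – bucketing objects by cell means each object only gets tested against the handful of objects sharing its cell instead of against every object in the world:

using System.Collections.Generic;

// Minimal uniform-grid broad phase (illustrative only).
// Entities are bucketed by cell so collision checks stay local.
public class SpatialGrid
{
    private readonly int cellSize;
    private readonly Dictionary<(int, int), List<int>> cells = new Dictionary<(int, int), List<int>>();

    public SpatialGrid(int cellSize)
    {
        this.cellSize = cellSize;
    }

    public void Clear()
    {
        cells.Clear();
    }

    // Bucket an entity id by the cell its position falls in.
    public void Insert(int entityId, float x, float y)
    {
        var key = ((int)(x / cellSize), (int)(y / cellSize));
        if (!cells.TryGetValue(key, out var bucket))
        {
            bucket = new List<int>();
            cells[key] = bucket;
        }
        bucket.Add(entityId);
    }

    // Candidates sharing the cell containing (x, y); only these need narrow-phase checks.
    public List<int> Query(float x, float y)
    {
        var key = ((int)(x / cellSize), (int)(y / cellSize));
        return cells.TryGetValue(key, out var bucket) ? bucket : new List<int>();
    }
}

A quadtree refines the same concept by subdividing only where objects cluster; a real version of either would also check neighboring cells (or insert objects into every cell their bounds overlap) to catch objects straddling a boundary.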

But while we’re talking…

Let’s talk Entity Component Systems, my new one true love. Sorry inheritance, you had your chance! Prior to implementing one in my engine, I was hitting bottlenecks when it came to tracking, managing, and manipulating traditional ‘heavy objects’. The overhead of pulling out properties and constantly switching contexts as you navigate tree after tree inside a given object was killing performance and lent itself to bugs.

 

Entity Component System
“A typical ECS” – Gamasutra

 

The switch to ECS, in particular the System part, came with a fairly substantial performance leap in addition to large improvements in code flow. Now, instead of object hierarchies, each system manages its own list of data-only objects. Having each system iterate its list in sequence means that the data currently being operated on is likely always hot in cache.
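As a rough sketch of that shape (the class names here are made up for illustration, not my engine’s actual code), a system ends up being little more than a tight loop over a flat list of plain data:

using System.Collections.Generic;

// Illustrative data-only component – no behavior, no hierarchy.
public class MovementData
{
    public int EntityId;
    public float X, Y;
    public float VelocityX, VelocityY;
}

// A system owns its own flat list and walks it in one tight loop per frame.
public class MovementSystem
{
    public List<MovementData> Items { get; } = new List<MovementData>();

    public void Update(float deltaSeconds)
    {
        for (int i = 0; i < Items.Count; i++)
        {
            var d = Items[i];
            d.X += d.VelocityX * deltaSeconds;
            d.Y += d.VelocityY * deltaSeconds;
        }
    }
}

With struct components or pooled arrays the data would also be physically contiguous, but even with classes the single-pass iteration keeps the access pattern predictable.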


Each system can then operate on hundreds of thousands of data containers without a care as to who they belong to.

“They’re Everywhere!” – somedude, a prototype, 2017

For my engine, I wanted to build an ECS as opposed to an EC. There is something inherently elegant to me about the separation between Entity (an id), Component (data) and System (logic).
One of the tricky parts of ECS is handling cases where Systems need to operate across component types, and handling component lookups in general (which require a cast). For my approach I introduced two optimizations. First, I introduced Nodes, an idea I stole from an implementation I saw around the web. Nodes help bridge the gap between a component and a ‘system that needs lots of different data’. A node can hold components but, importantly, also holds data that relates all of the components held within. For instance, here’s a simple Collision Node:
public class CollisionNode : INode
{
    public int Id { get; set; }
    public int CollidedWith { get; set; }
    public bool Checked { get; set; }
    public bool HadCollision => CollisionData.HasCollision;
    public PositionComponent Position { get; set; }
    public CollidableComponent CollisionData { get; set; }
}

The Node allows us to generate metadata about the joined objects in a lightweight way. In this way we avoid excessive lookups and lots of meta collections. We can iterate this one list and have all the info we need to take action.
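For example, a collision response system can be little more than a walk over its node list. The sketch below is hypothetical (the class name and the response logic are placeholders, and the real system does more), but it shows how each node already carries everything the pass needs:

using System.Collections.Generic;

// Hypothetical consumer of CollisionNodes; not the engine's exact system.
public class CollisionResponseSystem
{
    public List<CollisionNode> Nodes { get; } = new List<CollisionNode>();

    public void Update()
    {
        // One flat pass: position, collision data, and bookkeeping flags are
        // all on the node, so no per-entity lookups or casts are needed here.
        foreach (var node in Nodes)
        {
            if (node.Checked || !node.HadCollision)
                continue;

            // React to the recorded collision, e.g. queue damage or knockback
            // for node.Id versus node.CollidedWith.
            node.Checked = true;
        }
    }
}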

To get around the casting that generally comes with ECS, I opted for an enum-based component type property. This isn’t the best solution – I can’t even call it an OK solution – but I have yet to run into a significant issue with it. Why cast and then check that the cast succeeded when you can bake in the type info and grab the entities matching the enumeration value? It works for me, for now.
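Roughly, the approach looks like this – a sketch with made-up names (ComponentType, IComponent, ComponentStore), not my actual interfaces. Every component carries an enum tag, and lookups filter on the tag rather than casting blindly:

using System.Collections.Generic;
using System.Linq;

// Hypothetical enum of known component kinds.
public enum ComponentType { Position, Collidable, Sprite, Health }

// Hypothetical contract: every component bakes in its own type tag.
public interface IComponent
{
    int EntityId { get; }
    ComponentType Type { get; }
}

public static class ComponentStore
{
    // One flat list of everything; real code would likely bucket per type.
    public static readonly List<IComponent> All = new List<IComponent>();

    // Filter by tag instead of attempting casts and checking the results.
    public static IEnumerable<IComponent> OfType(ComponentType type) =>
        All.Where(c => c.Type == type);
}

The cast to the concrete component still happens at the call site, but only after the tag check has guaranteed it will succeed, which is the trade-off described above.
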
My ECS implementation is far from the best but I’ve found that it is relatively easy to add new features and have them “just work”. After years of building inheritance hierarchies – I finally get it. If you’re not using an EC or ECS implementation in your engine, you should probably consider it!


-Jonathan

Monogame Networking .. a Decade Later

Today I’ve been investigating options for integrating a multiplayer layer into my Monogame based game engine. When I first opened my browser to take a look, I popped open my bookmarks and saw a series of sites and postings circa 2012-2014 that talked exclusively about Lidgren, Raknet or ‘roll your own’. Or worse yet, there were numerous links to the now dead XNA.Networking API.

Enter Unreliability

After a bit of link purging, I began a new phase of research and stumbled upon the excellent BenchmarkNet project (https://github.com/nxrighthere/BenchmarkNet) which is a testing app for reliable UDP libraries.

Now, I must admit, I’m partial to UDP and reliable UDP in particular. This is a topic that is somewhat controversial but most high-end games are using some variation of TCP/UDP or reliable UDP. Sometimes together. Most ‘roll your own’ systems eventually become reliable UDP. I won’t rehash the arguments – but an excellent post can be found here and discussion here.

In my personal experience, TCP in game dev has given me headaches due to re-transmit issues and lack of packet prioritization. I’ll admit though that every game or project I worked on in the 2000s was fully or majority TCP – including the failed Shadowrun MMO and RunUO (Ultima Online). Times have changed though and reliable UDP is no longer a bad word (or so I hope). So let’s look at some of the primary options…

Let the games begin

Below are the latest results pulled from the 64-connected-client test on BenchmarkNet’s GitHub wiki.

As you can see, most of the libraries perform within 10% of each other, except for a few particularly bad performances turned in by UNet and Lidgren, with issues related to memory consumption and CPU utilization respectively.

With the spread so narrow, I began to look at other things that I find important when picking out a library — source code access, license, and features. I won’t go through each one but I ruled out all but two options due to performance, license, lack of access to source, or monetization schemes I was uninterested in.

And the winner…

In the end, I noticed that LiteNetLib often had the lowest CPU utilization, while Neutrino was often not far behind but with lower bandwidth utilization. Better yet, both are open source and MIT licensed! In addition, both libraries are exceptionally cross-platform, feature complete, have tight serialization, and work in either client-server or P2P configurations.

 

Ever Present Multiplayer – The Local Server

The approach that I’m leaning towards is the local game server pioneered by id with Doom and Quake. A server embedded in the client allows you to code the game as if it were multiplayer no matter what, while also supporting online gameplay modes. I think this approach would mesh well with the existing Entity Component System (ECS) by jumping on the same hooks used by the AI for input and rendering. My thinking at the moment is that the new NetworkSystem can create AINodes (or a variant of them) which will represent either the other players or the decisions of the AISystem. Either way, their logic remains largely the same and ‘just works’.
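To make that concrete, here’s the rough shape I have in mind. This is purely speculative – the names (RemoteInputNode, the packet tuple) are placeholders rather than working netcode – but it shows the NetworkSystem emitting the same kind of nodes the AISystem would:

using System.Collections.Generic;

// Hypothetical node shared by AI- and network-driven entities:
// "what does this entity want to do this frame?"
public class RemoteInputNode
{
    public int EntityId;
    public float MoveX, MoveY;
    public bool Fire;
}

// Speculative sketch: translate messages from the (local or remote) server
// into the same node type the AISystem produces for its entities.
public class NetworkSystem
{
    public List<RemoteInputNode> Output { get; } = new List<RemoteInputNode>();

    public void Update(IEnumerable<(int entityId, float moveX, float moveY, bool fire)> packets)
    {
        Output.Clear();
        foreach (var p in packets)
        {
            Output.Add(new RemoteInputNode
            {
                EntityId = p.entityId,
                MoveX = p.moveX,
                MoveY = p.moveY,
                Fire = p.fire
            });
        }
        // Downstream systems (movement, combat, rendering) consume Output the
        // same way they consume the AISystem's nodes – they never know whether
        // the intent came from local AI or a networked player.
    }
}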

If my logic is sound, I can deploy to the xbox with the local server and if/when I get network API access on the xbox, I can point to a remote server and it should ‘just work’.

 

In any case, I’ll post back with my results on this whole networking refactoring!

P.S. Short aside: you might be wondering, what happened to the whole ‘migrating the PC game engine to UWP’ project? Well, it turns out it was pretty painless. After a few minor changes (e.g., the Window class not having a Position), I managed to get the engine up and running in under an hour. It turns out all of the planning and anguish that went into selecting only cross-platform libraries was worth it. This is a first…

-Jonathan

Serializing Game Settings

Today I began the work of migrating my C# Monogame game engine (code named Rogue Squad) from a DirectX/Windows codebase to the Windows 10 Universal Windows Platform. I expected the refactoring to require rather large changes, but thus far I’ve only run into two. I’ll detail the second, minor change, and why it matters, at the end.

First, I started with a straightforward DataContract to hold the fairly basic settings for the game. The attributes allow the DataContract serializer to easily read/write the file in a type-safe way.

using System.Runtime.Serialization;

[DataContract]
public class GameSettings : IGameSerializableObject
{
    [DataMember]
    public int GlobalVolume { get; set; }
    [DataMember]
    public int FxVolume { get; set; }
    [DataMember]
    public int MusicVolume { get; set; }
    [DataMember]
    public int SpeechVolume { get; set; }
    [DataMember]
    public int ResolutionH { get; set; }
    [DataMember]
    public int ResolutionW { get; set; }
    [DataMember]
    public bool EnableFullScreen { get; set; }
    [DataMember]
    public bool UseVsync { get; set; }

    public static GameSettings Default => new GameSettings
    {
        GlobalVolume = 100,
        FxVolume = 100,
        MusicVolume = 100,
        SpeechVolume = 100,
        ResolutionH = 800,
        ResolutionW = 600,
        EnableFullScreen = false,
        UseVsync = false
    };
}

In the DX/Windows app, the serialization is equally straightforward. We simply create or open the file and stream it in, casting the JSON to our GameSettings.

using System.IO;
using System.Runtime.Serialization.Json;

public class AppSettings
{
    public const string GAME_SETTINGS = "gameSettings.json";
    DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(GameSettings));

    public GameSettings LoadSettings()
    {
        if (!File.Exists(GAME_SETTINGS)) return GameSettings.Default;

        using (FileStream stream = new FileStream(GAME_SETTINGS, FileMode.Open))
        {
            return (GameSettings)serializer.ReadObject(stream);
        }
    }

    public void SaveSettings(GameSettings settings)
    {
        using (FileStream stream = new FileStream(GAME_SETTINGS, FileMode.Create))
        {
            serializer.WriteObject(stream, settings);
        }
    }
}

Unfortunately, UWP’s sandboxed environment means that any sort of direct file write is out of the question. This also applies to asset loading. On the one hand, this API style has been around a little while – having made its splash with Windows Phone 7 and the initial WinRT iteration of the Microsoft App Store – so most issues should be long since resolved. Our main problem is that the ‘all async all the time’ API design doesn’t quite mesh with the ‘loop it baby’, noticeably non-async nature of most game APIs. While this is changing, as of the time of this writing Monogame 3.6 does not make much use of async APIs. We can’t really fault it though; it started as a re-implementation of the defunct XNA library for the Xbox 360. While its codebase has evolved to support everything from the PS4 to the Xbox One and the Nintendo Switch, its design is decidedly stuck in late 2009. That’s not necessarily a bad thing. If it ain’t broke..

using System.IO;
using System.Runtime.Serialization.Json;
using System.Threading.Tasks;
using Windows.Storage;

public class AppSettings
{
    public const string GAME_SETTINGS = "gameSettings.json";
    DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(GameSettings));
    StorageFolder localFolder;

    public AppSettings()
    {
        localFolder = ApplicationData.Current.LocalFolder;
    }

    public async Task<GameSettings> LoadSettings()
    {
        // File.Exists doesn't see into the sandboxed LocalFolder, so ask the
        // StorageFolder itself whether the settings file is there.
        var file = await localFolder.TryGetItemAsync(GAME_SETTINGS) as StorageFile;
        if (file == null) return GameSettings.Default;

        using (var stream = await file.OpenStreamForReadAsync())
        {
            return (GameSettings)serializer.ReadObject(stream);
        }
    }

    public async Task SaveSettings(GameSettings settings)
    {
        // ReplaceExisting both creates the file on first run and truncates any
        // older, longer settings file on later saves.
        var file = await localFolder.CreateFileAsync(GAME_SETTINGS, CreationCollisionOption.ReplaceExisting);
        using (var stream = await file.OpenStreamForWriteAsync())
        {
            serializer.WriteObject(stream, settings);
        }
    }
}

The new version is fairly straightforward and technically “cross-platform” compatible back to Windows 8. The key changes are the switch to the StorageFolder and StorageFile abstractions, as well as the use of a variety of async functions.

On the engine side, where you’ll eventually consume these settings, you’ll either have to mark your methods as async, wrap the calls in a Task<T>, or call the dreaded .Result property. I was fortunate enough that this code was only being called from the Options screen in the UI, so I was able to mark the event handlers async and call it a day, like so..

private async void Back_Resolution_Selected(object sender, PlayerIndexEventArgs e)
{
    // Save the current resolution settings.
    gameSettings.ResolutionH = Engine.Instance.ScreenHeight;
    gameSettings.ResolutionW = Engine.Instance.ScreenWidth;
    await settings.SaveSettings(gameSettings);
}

Over the next few weeks I will be posting key challenges and solutions as I continue porting my engine to UWP. Ultimately, the goal is to get everything running on the Xbox One and pick up development from there. It may be a while…

 

Until next time, cheers!

-Jonathan

 

Should you Kickstart your new game idea?

As I began work on a new game idea yesterday (Society of Man), I started thinking about whether or not it would be a good idea to Kickstart the game. You’d think it’d be a simple proposition: you get a chance to prove out your game by putting it in front of thousands of eager potential customers and seeing if they’ll pay you 20, 30, or 50k upfront to reserve a copy. Where’s the negative?

How big is your game?

Society of Man is a small game. Believe it or not, running a Kickstarter can be a job in and of itself (see the Guide for Video Game Projects on Kickstarter). It takes a large amount of planning for things like reward tiers, budget, design, and marketing. It requires a good amount of constant interaction with your backers. It also typically requires that you have a working demo, proof of concept, or vertical slice. If your game isn’t large enough, you just might find that the effort required to execute a successful Kickstarter is actually greater than the effort it would take to build and finish most of the game!

Overcommitment and Scoping

Another thing which frequently happens, even to the most experienced teams running a Kickstarter, is over-commitment, whether through reward tiers or community postings. Oftentimes you’ll feel as though you need to add goals or reward levels for things that would otherwise have been ‘nice-to-haves’. Something as common as ‘support additional platforms’, which is nearly always expected, can be a very large stretch goal and can easily lead to a large amount of additional work that you could otherwise have ignored until there was sufficient demand.

TL;DR

You should seriously consider whether your game is large enough to warrant a Kickstarter, and whether the money you’re asking for will pay for the large amount of additional effort you’ll have to put in to really market both the Kickstarter and the game. What would have been a small, private, 3-month project can easily turn into 2 months of pre-development and marketing followed by 6 months of integrating dozens of features you would otherwise have skipped.