Everything there is to know about Python Flask

As I was researching whether I wanted to go forward with designing a new website using Python and Flask, I managed to find detailed information, libraries, and tutorials on virtually anything you’d want to do. I figured I’d post it here for personal reference, and in case anyone else happens to bump into it.

Using the resources below you can fairly quickly build a full-featured website with a database, Google/Facebook authentication, payment processing, and even internationalization using the very simple Python Flask framework. It can then be hosted in minutes on a simple, cheap Python hosting site like PythonAnywhere. (I am not affiliated with them.)

Flask is an un-opinionated micro-framework for building web APIs, web services, and websites. For my use case, I found the combination of an unobtrusive framework, simple templating via the built-in Jinja2, and access to Python’s massive library ecosystem an absolute boon to development time. Lastly, if you’re new to Python, I highly recommend the free (make sure you select Community) edition of PyCharm.
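To give a sense of how little ceremony Flask requires, here’s a minimal sketch; the route and template are my own illustration, not from any of the linked tutorials:

```python
from flask import Flask, render_template_string

app = Flask(__name__)

@app.route("/hello/<name>")
def hello(name):
    # Jinja2 templating is built in; real apps keep templates in templates/*.html
    return render_template_string("Hello, {{ name }}!", name=name)
```

That’s a complete app: run it with `flask run` and visit /hello/world.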

Flask Library Conversation
(Below is the best tutorial out there; start at Part 1 if you’re brand new.)
https://buildasaasappwithflask.com/ (NOT FREE)

Gif Maker for Monogame

I was playing around with some graphics code that I wanted to share, and I was tired of going through the huge, convoluted process of recording my screen and generating a gif (or using a gif-making tool). I wanted something that would always be there. Since I could not find a solid library that takes Monogame screen data and outputs a gif, I decided to write one myself!


I created a small library, which I’m calling screentools, with a few utilities for creating screenshots and generating gifs. Included in the library is a test project with example usage.

Full Source here
Test app controls: F12 for gif capture (hold, and release when done), F9 for screenshot (one per keypress).

I intend to improve it as needed as I wrap it back into my game code. The immediate improvements I plan to add are multi-threading and compression. Exploring the problem space made me realize that I could write a more efficient Monogame specific gif maker. For now, it uses .NET’s System.Drawing to bootstrap the process and .NET’s Gif codecs to output. That improvement would be a longer-term one unless I find that someone else has already done it.

This small library was an impulse build so.. expect bugs.


Misbehaving collision and also ECS

Ah collision you naughty monkey.

On the bright side, Visual Studio 2017’s performance monitoring tools are pretty cool – when they work. In this case, the tools highlighted the exact offending line of code where the CPU was spending a comically high 50% of its time! For once, it’s not premature optimization, but I was expecting this. When I hack together game engines in intense one-day sessions, I tend to take brute-force approaches to every solution in an effort to get something working on screen. Function that’s O(N^4), looping through every object in the game? Whatever, slap a ‘TODO’ on it and make it someone else’s problem. In this case it’s O(N^2), but what’s a power of two between friends? I first came upon the issue when I was running a test level and found an ‘AddRandomEntities()’ command in the console window mapped to F1. Curious, I kept hitting F1 until my game slowed to a crawl. I looked at the data and saw that a mere 600 collidable objects had brought the engine down. That may seem like a lot, but add a bit of bullet hell and monsters on top of more unique objects and that number comes down real quickly.


“A poorly coded collision function” – circa 2017


Fortunately, this is an easy fix in a 2D game: subdivide the world via a quadtree or similar structure. Really, even subdividing the screen into quadrants alone would quadruple performance. For an in-depth tutorial on building quadtrees (along with nice explanatory graphics), try this. I know, I should have written something up about it here, but apparently my old post was taken down from Stack Overflow for some odd reason.
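The engine itself is C#, but the broad-phase idea is easy to sketch in a few lines of Python: bucket entities into coarse grid cells and only test pairs that share a neighborhood, instead of every pair against every pair. (The cell size and entity format here are illustrative, not from the engine.)

```python
from collections import defaultdict

CELL = 64  # cell size in pixels; tune to your average entity size

def grid_key(x, y):
    return (int(x) // CELL, int(y) // CELL)

def broad_phase(entities):
    """entities: list of (id, x, y). Returns candidate id pairs for narrow-phase tests."""
    grid = defaultdict(list)
    for eid, x, y in entities:
        grid[grid_key(x, y)].append(eid)
    pairs = set()
    for (cx, cy), ids in grid.items():
        # gather occupants of this cell and its 8 neighbors
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid.get((cx + dx, cy + dy), []))
        for eid in ids:
            for other in nearby:
                if eid < other:
                    pairs.add((eid, other))
    return pairs
```

A quadtree does the same job adaptively, but even this fixed grid turns the all-pairs O(N^2) loop into roughly O(N) for sparse scenes.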
Anyway, without that offending and naive collision function, the engine can render a few hundred thousand objects at 120 fps.

But while we’re talking…

Let’s talk Entity Component Systems, my new one true love. Sorry, inheritance, you had your chance! Prior to implementing one in my engine, I was hitting bottlenecks when it came to tracking, managing, and manipulating traditional ‘heavy objects’. The added overhead of pulling out properties and constantly switching contexts while navigating tree after tree of a given object was killing performance and lent itself to bugs.


Entity Component System
“A typical ECS” – Gamasutra


The switch to ECS, in particular the System part, came with a fairly substantial performance leap in addition to large improvements in code flow. Now, instead of object hierarchies, each system manages its own list of data-only objects. Having each system iterate its list in sequence means that the current list of data being operated on is likely always hot in cache.
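The shape of a system is easy to sketch (in Python here for brevity; the engine itself is C#, and the names are my own illustration): plain data records in a flat list, walked in one tight loop per frame.

```python
class Position:
    """A data-only component: no behavior, just fields."""
    __slots__ = ("entity_id", "x", "y", "vx", "vy")
    def __init__(self, entity_id, x, y, vx, vy):
        self.entity_id, self.x, self.y = entity_id, x, y
        self.vx, self.vy = vx, vy

class MovementSystem:
    """Owns its own flat component list and iterates it in sequence each frame."""
    def __init__(self):
        self.components = []
    def update(self, dt):
        for c in self.components:  # one contiguous pass, cache-friendly access pattern
            c.x += c.vx * dt
            c.y += c.vy * dt
```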



Each system can then operate on hundreds of thousands of data containers without a care as to who they belong to.

“They’re Everywhere!” – somedude, a prototype, 2017

For my engine, I wanted to build an ECS as opposed to an EC. There is something that is inherently elegant to me about the separation between Entity (an id), Component (data) and System (logic).
One of the tricky parts of ECS is handling cases where Systems need to operate across component types, as well as component lookups (which require a cast). For my approach I introduced two optimizations. First, I introduced Nodes, an idea I borrowed from an implementation I saw around the web. Nodes help to bridge that tiny gap between a component and a ‘system that needs lots of different data’. A node can hold components but also, importantly, holds data which is related across all components held within. For instance, here’s a simple Collision Node:
public class CollisionNode : INode
{
        public int Id { get; set; }
        public int CollidedWith { get; set; }
        public bool Checked { get; set; }
        public bool HadCollision => CollisionData.HasCollision;
        public PositionComponent Position { get; set; }
        public CollidableComponent CollisionData { get; set; }
}

The Node allows us to generate metadata about the joined objects in a lightweight way. In this way we avoid excessive lookups and lots of meta collections. We can iterate this one list and have all the info we need to take action.

To get around the casting that generally comes with ECS, I opted for an enum-based component type property. This isn’t the best solution – I can’t even call it an OK solution – but I have yet to run into a significant issue with it. Why cast and then check that your cast succeeded when you can bake in the type info and grab entities matching the enumeration type? It works for me, for now.
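The enum-tag idea, sketched in Python (the engine is C#, and these type and field names are illustrative): each component carries a type tag, so a system filters by tag instead of casting and checking.

```python
from enum import Enum, auto

class ComponentType(Enum):
    POSITION = auto()
    COLLIDABLE = auto()

class Component:
    def __init__(self, ctype, entity_id, data):
        self.ctype = ctype          # baked-in type info, no runtime cast needed
        self.entity_id = entity_id
        self.data = data

def components_of(components, ctype):
    # grab everything matching the enumeration type
    return [c for c in components if c.ctype is ctype]
```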
My ECS implementation is far from the best but I’ve found that it is relatively easy to add new features and have them “just work”. After years of building inheritance hierarchies – I finally get it. If you’re not using an EC or ECS implementation in your engine, you should probably consider it!


Monogame Networking .. a Decade Later

Today I’ve been investigating options for integrating a multiplayer layer into my Monogame-based game engine. When I first opened my browser to take a look, I popped open my bookmarks and saw a series of sites and postings circa 2012–2014 that talked exclusively about Lidgren, RakNet, or ‘roll your own’. Worse yet, there were numerous links to the now-dead XNA networking API.

Enter Unreliability

After a bit of link purging, I began a new phase of research and stumbled upon the excellent BenchmarkNet project (https://github.com/nxrighthere/BenchmarkNet) which is a testing app for reliable UDP libraries.

Now, I must admit, I’m partial to UDP and reliable UDP in particular. This is a topic that is somewhat controversial but most high-end games are using some variation of TCP/UDP or reliable UDP. Sometimes together. Most ‘roll your own’ systems eventually become reliable UDP. I won’t rehash the arguments – but an excellent post can be found here and discussion here.

In my personal experience, TCP in game dev has given me headaches due to re-transmit issues and lack of packet prioritization. I’ll admit though that every game or project I worked on in the 2000s was fully or majority TCP – including the failed Shadowrun MMO and RunUO (Ultima Online). Times have changed though and reliable UDP is no longer a bad word (or so I hope). So let’s look at some of the primary options…
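The core of what a reliable-UDP layer adds on top of raw datagrams can be sketched in a few lines (Python here for brevity; the class and method names are my own illustration – real libraries like Lidgren or LiteNetLib add ordering channels, congestion control, and much more): sequence numbers, acks, and a retransmit queue.

```python
class ReliableSender:
    """Toy sketch: number outgoing payloads and keep them until acked."""
    def __init__(self):
        self.next_seq = 0
        self.unacked = {}  # seq -> payload, held until the peer acks it

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = payload
        return (seq, payload)  # the datagram to put on the wire

    def on_ack(self, seq):
        # the peer confirmed receipt; stop tracking this payload
        self.unacked.pop(seq, None)

    def pending_retransmits(self):
        # on a timeout, everything still unacked goes out again
        return sorted(self.unacked.items())
```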

Let the games begin

Below are the latest results pulled from the 64-connected-client test on BenchmarkNet’s GitHub wiki.

As you can see, most of the libraries perform within 10% of each other, except for a few particularly bad performances turned in by UNet and Lidgren, with issues related to memory consumption and CPU utilization respectively.

With the spread so narrow, I began to look at other things that I find important when picking out a library — source code access, license, and features. I won’t go through each one but I ruled out all but two options due to performance, license, lack of access to source, or monetization schemes I was uninterested in.

And the winner…

In the end, I noticed that LiteNetLib often had the lowest CPU utilization, while Neutrino was often not far behind but with lower bandwidth utilization. Better yet, both are open source and MIT licensed! In addition, both libraries are exceptionally cross-platform, feature complete, have tight serialization, and work in either client-server or P2P configurations.


Ever Present Multiplayer – The Local Server

The approach that I’m leaning towards is the local game server pioneered by id with Doom and Quake. This server, embedded in the client, allows you to code the game as if it were multiplayer no matter what, while also supporting online gameplay modes. I think this approach would mesh well with the existing Entity Component System (ECS) by jumping on the same hooks used by the AI for input and rendering. My thinking at the moment is that the new NetworkSystem can create AINodes (or a variant of them) which will represent either the other players or the decisions of the AISystem. Either way, their logic remains largely the same and ‘just works’.
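The shape of that idea is simple enough to sketch (Python for brevity; the engine is C#, and these interface and class names are hypothetical): the consuming systems poll a command source and never know whether it’s backed by local AI or a network connection.

```python
class CommandSource:
    """Either local AI or the network; the game loop doesn't care which."""
    def poll(self):
        # returns a list of (entity_id, command) tuples
        raise NotImplementedError

class LocalAISource(CommandSource):
    def __init__(self, decisions):
        self.decisions = decisions
    def poll(self):
        return list(self.decisions)

class NetworkSource(CommandSource):
    def __init__(self, connection):
        self.connection = connection  # anything exposing read_commands()
    def poll(self):
        return self.connection.read_commands()

def drive_entities(source):
    # the game loop consumes commands identically either way
    return {eid: cmd for eid, cmd in source.poll()}
```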

If my logic is sound, I can deploy to the Xbox with the local server, and if/when I get network API access on the Xbox, I can point to a remote server and it should ‘just work’.


In any case, I’ll post back with my results on this whole networking refactoring!

P.S. Short aside: you might be wondering, what happened to the whole ‘migrating PC Game Engine to UWP’ project? Well, it turns out it was pretty painless. After a few minor changes (i.e., the Window class not having a Position) – I managed to get the engine up and running in under an hour. It turns out all of the planning and anguish I had spent over selecting only cross-platform libraries was worth it. This is a first…


Serializing Game Settings

Today I began the work of migrating my C# Monogame game engine (code-named Rogue Squad) from a DirectX/Windows codebase to the Windows 10 Universal Windows Platform. I expected rather large changes to be required in the refactoring, but thus far I’ve only run into two. I’ll detail the second minor change, and why it matters, at the end.

First, I started with a straightforward DataContract to hold the fairly basic settings for the game. The annotations allow the DataContract serializer to easily read/write from file in a type-safe way.

[DataContract]
public class GameSettings : IGameSerializableObject
{
    [DataMember] public int GlobalVolume { get; set; }
    [DataMember] public int FxVolume { get; set; }
    [DataMember] public int MusicVolume { get; set; }
    [DataMember] public int SpeechVolume { get; set; }
    [DataMember] public int ResolutionH { get; set; }
    [DataMember] public int ResolutionW { get; set; }
    [DataMember] public bool EnableFullScreen { get; set; }
    [DataMember] public bool UseVsync { get; set; }

    public static GameSettings Default => new GameSettings { GlobalVolume = 100, FxVolume = 100, MusicVolume = 100, SpeechVolume = 100, ResolutionW = 800, ResolutionH = 600, EnableFullScreen = false, UseVsync = false };
}

In the DX/Windows app, the serialization is equally straightforward. We simply create or open the file and stream it in, casting the JSON to our GameSettings.

using System.IO;
using System.Runtime.Serialization.Json;

public class AppSettings
{
    public const string GAME_SETTINGS = "gameSettings.json";
    DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(GameSettings));

    public GameSettings LoadSettings()
    {
        if (!File.Exists(GAME_SETTINGS)) return GameSettings.Default;
        using (FileStream stream = new FileStream(GAME_SETTINGS, FileMode.Open))
            return (GameSettings)serializer.ReadObject(stream);
    }

    public void SaveSettings(GameSettings settings)
    {
        using (FileStream stream = new FileStream(GAME_SETTINGS, FileMode.Create))
            serializer.WriteObject(stream, settings);
    }
}

Unfortunately, UWP’s sandboxed environment means that any sort of direct file write is out of the question. This also applies to asset loading. On the one hand, this API style has been around a little while – having made its splash with Windows Phone 7 and the initial WinRT iteration of the Microsoft app store – so most issues should be long since resolved. Our main problem is that the ‘all async, all the time’ API design doesn’t quite mesh with the ‘loop it, baby’, noticeably non-async nature of most game APIs. While this is changing, as of the time of this writing Monogame 3.6 does not make much use of async APIs. We can’t really fault it, though; it started as a re-implementation of the defunct XNA library for the Xbox 360. While its codebase has evolved to support everything from the PS4 to the Xbox One and the Nintendo Switch, its design is decidedly stuck in late 2009. That’s not necessarily a bad thing. If it ain’t broke..

using System.IO;
using System.Runtime.Serialization.Json;
using System.Threading.Tasks;
using Windows.Storage;

public class AppSettings
{
    public const string GAME_SETTINGS = "gameSettings.json";
    DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(GameSettings));
    StorageFolder localFolder;

    public AppSettings()
    {
        localFolder = ApplicationData.Current.LocalFolder;
    }

    public async Task<GameSettings> LoadSettings()
    {
        // File.Exists can't see into the sandboxed LocalFolder; use TryGetItemAsync instead
        var file = await localFolder.TryGetItemAsync(GAME_SETTINGS) as StorageFile;
        if (file == null) return GameSettings.Default;
        using (var stream = await file.OpenStreamForReadAsync())
            return (GameSettings)serializer.ReadObject(stream);
    }

    public async Task SaveSettings(GameSettings settings)
    {
        var file = await localFolder.CreateFileAsync(GAME_SETTINGS, CreationCollisionOption.ReplaceExisting);
        using (var stream = await file.OpenStreamForWriteAsync())
            serializer.WriteObject(stream, settings);
    }
}

The new version is fairly straightforward and technically “cross-platform” compatible back to Windows 8. The key changes are the switch to the StorageFolder and StorageFile abstractions, as well as the use of a variety of async functions.

On the engine side, where you’ll eventually consume these settings, you’ll either have to mark your methods as async, wrap them in a Task&lt;T&gt;, or read the dreaded .Result property. I was fortunate that this code was only being called from the Options screen in the UI, so I was able to mark the event handlers async and call it a day, like so:

private async void Back_Resolution_Selected(object sender, PlayerIndexEventArgs e)
{
    // save resolution settings
    gameSettings.ResolutionH = Engine.Instance.ScreenHeight;
    gameSettings.ResolutionW = Engine.Instance.ScreenWidth;
    await settings.SaveSettings(gameSettings);
}

Over the next few weeks I will be posting key challenges and solutions as I continue porting my engine to UWP. Ultimately, the goal is to get everything running on the Xbox One and pick up development from there. It may be a while…


Until next time, cheers!



Should you Kickstart your new game idea?

As I began work on a new game idea yesterday (Society of Man), I started thinking about whether it would be a good idea to Kickstart this game or not. You’d think it’d be a simple proposition: you get a chance to prove out your game by putting it in front of thousands of potentially eager customers and seeing if they’ll pay you 20, 30, or 50k upfront to reserve a copy. Where’s the negative?

How big is your game?

Society of Man is a small game. Believe it or not, running a Kickstarter can be a job in and of itself (see the Guide for Video Game Projects on Kickstarter). It takes a large amount of planning for things like reward tiers, budget, design, and marketing. It requires a good amount of constant interaction with your backers. It also typically requires that you have a working demo, proof of concept, or vertical slice. If your game isn’t large enough, you just might find that the effort required to execute a successful Kickstarter is actually more than it would take to build and finish most of the game!

Overcommitment and Scoping

Another thing which frequently happens, even to the most experienced teams, is over-commitment, whether through reward tiers or community postings. Often you’ll feel as though you need to add goals or reward levels for things that would otherwise have been ‘nice-to-haves’. Something as common as ‘support additional platforms’, which is nearly always expected, can be a very large stretch and easily lead to a large amount of additional work that you could have otherwise ignored until there was sufficient demand.


You should seriously consider whether your game is large enough to warrant a Kickstarter, and whether the money you’re asking for will pay for the large amount of additional effort you’ll have to put forward to really market the Kickstarter and the game. What would have been a small, private, 3-month project can easily turn into 2 months of pre-development and marketing followed by 6 months of integrating dozens of features you would have skipped otherwise.

Ludum Dare 39!

Tomorrow is the start of Ludum Dare 39 and I am super excited to get started! For the past few months I’ve been hard at work on a few different web development projects, but a game idea for a remake of Sega’s Shadowrun RPG has been in the back of my mind.

When I was in high school, back in the early 90s, I wrote a game called Digital Reset, a cyberpunk RPG. Last month I started laying down plans for a remake. At the time I wrote it, it had to be technically simple (28kb). The game was largely text- and menu-driven, with graphics for each location and occasionally for a new story element. Given the massive leap forward in technology, I’m hoping I can remake the game (or some semblance of it) along the lines of Shadowrun, in 48 hours. Before you laugh, realize that I wrote the original in an API where you had to command individual pixels on and off in order to draw! Who knows, it’ll probably end up being a blank map with a stickman walking around, but it’ll be fun either way.

I plan to knock out a prototype using Monogame and the excellent Monogame.Extended library. Having used C# for the better part of 15 years in software development, I’ve always found that I’m much faster knocking out code in C#/Monogame than on any other platform (sniff – one day I will get around to seriously trying out Love2D). You may ask, why not use Unity? My answer is simple: I know Monogame. I wrote my first XNA game on day one, when it was released for the Xbox 360. Writing simple engine code comes naturally, and it’s what I’ve always done while heading up the Baltimore Indie Game Developers group. I think I was the only non-Unity developer, come to think of it…

In any case, I look forward to seeing what my rusty code fingers can accomplish in 48hrs. I will be posting my code on GitHub and also linking here.

Happy Coding.

3 Things I wish I knew about F# before I started that big project

Hey everyone, today I wanted to share some insights I’ve gained while learning F# over the past few weeks/months/years (it’s been an on-again, off-again relationship). I love F#, but coming from a land of large C# projects, there are some edge cases that may trip you up if you’re relatively new to F#.

Bitshift enumerations are not supported

Coming from C#, more than a few of my enumerations followed the pattern of:

enum myEnum
{
   a = 1 << 0,
   b = 1 << 1,
   c = 1 << 2,
   d = 1 << 3
}

Unfortunately, using bitshift operators within a union/enumeration declaration in F# is not supported. You can, however, accomplish the same thing using manually written binary literals, as shown below. Easy, but slightly less maintainable.

type myEnum =
   | a = 0b00001
   | b = 0b00010
   | c = 0b00100
   | d = 0b01000

Null Refs will still plague you

One of the things that quickly becomes apparent when integrating your F# app into a C# ecosystem is that while idiomatic F# code rarely deals in null, it will still crash spectacularly when handed null C# objects. Consider the following:

match SomeObj.Prop with  //nullref
   | condition A -> ...
   | condition B -> ... 

Solution? Wrap your questionable calls to C# objects in a ‘toOption’ call:

let toOption = function
   | null -> None
   | x -> Some x

then you can use it like so..

let myVal = toOption SomeObj.Prop
match myVal with
    | Some p -> ...
    | None -> ...

Presto! No more null-ref worries in your elegant F# code. You also avoid the need for constant “if x &lt;&gt; null then …” checks.


Records serialize with @ suffixes

One of the great things about F# is the ease of record creation. Dreams of serializing these records and sending them across the wire to your web APIs can quickly be shattered when you notice all your JSON objects serialized with @ suffixes. What gives? F# record members compile down to C#-style fields, and it’s the underlying field name (which ends in @) that most default .NET serializers pick up, resulting in something like the following:

type myRecord = {
   Name : string
   Age : int
   Exp : int
}
being serialized as:

{ "Name@":"Gandolf", "Age@":225, "Exp@":15 }

Fortunately, there’s JSON.NET to the rescue! No, you won’t have to re-write your serialization logic or anything. One approach (sketched below with JSON.NET’s [&lt;JsonProperty&gt;] attribute; adjust the names to your own record) is to decorate each record field so the serializer uses the clean name. The rest is handled auto-magically:

type myRecord = {
   [<JsonProperty("Name")>] Name : string
   [<JsonProperty("Age")>] Age : int
   [<JsonProperty("Exp")>] Exp : int
}


That’s all for now folks but stop by in the future as I slowly begin to wake from my stupor and flesh out this blog/site.


So you’re an IT guy who wants to be a developer eh?

A few days ago I was searching for an initial topic to blog about, when an old friend asked me for some pointers on moving from IT to a developer role. I figured this was as good a starting point as any!

First, let’s start with the secret recruiters don’t want you to know – you don’t need to settle for an entry-level position or a significant pay cut! You’ve been in IT for a while, right? 2 years? 5? 10? More? I made the move back in 2001 with barely 1 year of ad-hoc, on-the-job development experience and went on to become a lead developer within 2 years! I can’t guarantee that everyone will find it this easy, but if you’re passionate and willing to follow a few easy steps, I think you can do it!

So what are these steps? There are three, actually. I call them Immersion, Practice, and Refinement. I’ve used these three steps for everything from painting to music-making to rock climbing. I’d say they’re more like “stages” than steps, and you’ll find yourself iterating through them repeatedly as you master a given role or skill.


It all begins with immersion. If you’re looking to change roles, you need to immerse yourself in that new role in order to get a guaranteed drip feed of critical industry lingo and terminology. Now bear with me – you’re probably thinking “gee, that’s obvious”. This is all part of avoiding the “car crash interview”; it’s part of surveying the land and getting an idea of your unknown-unknowns. The great part is, you are probably already performing this step without even thinking about it. If so, congrats! You’re already 1 for 3!

The key to immersion is to become a part of the environments developers inhabit. Hang out on industry blogs, find a local meetup or free conference, lurk in a development Slack channel, hang out on the Stack Exchange Software Engineering site, or scan through Reddit’s many developer forums. You want to become familiar with the common terminology you’re going to encounter. Learning the lingo, even at a surface level, helps to improve your confidence and primes your mind for better understanding when you are ready to engage those topics fully.

If I were to ask “what sort of interfaces and design patterns might you choose to construct a library book management app”, you don’t want to be looking around like a kid lost in the woods. After immersing yourself, it’ll be impossible not to have at least heard these terms several times in passing. You might even know that ‘interfaces are how we define contracts in software’ and ‘design patterns are re-usable software designs for common problems’. You might not be able to answer questions for a lead programmer role, but you’ll know enough to say “My experience with design patterns is limited, but I’d…”. The key is to get a better understanding of what you do and do not know. Once this becomes clear, you can approach both learning and interviews in a cool and honest way.


Next comes practice. This stage is simple but critical. It’s time to build an app! You’ll want to set yourself up with a fresh GitHub account and create a free, personal code repository. Think of a task that you perform on your computer, or a common task you automate by other means, and let that be the starting point for your initial project. My first serious app was a network scanner which used multi-threading to reach out, ping, and record the responses on a given class C network. Simple. If that sounds difficult, don’t worry – after a few short weeks of practice and immersion, you’ll be able to write that same app, even if it isn’t the cleanest and most object-oriented code ever.

Having fresh code in your GitHub accomplishes several things. First, it proves you’re serious about wanting to be a developer. Second, it shows that you have at least a basic understanding of the most popular code repository on the planet. Third, it will be your own personal motivation. Ask any coder and they will tell you that starting a new project can be tricky but once it gets going, you’ll end up coding into the wee hours of the morning as you think of new features and ways to optimize, refactor and improve your code. Once that happens (if it hasn’t already), you will have ignited an insatiable desire to develop ever more performant and elegant code — the learning will come as a byproduct.


The last stage is what I like to call refinement, and I’ll tell you right now, it goes on indefinitely. This is the constant process by which you’ll take the things you’ve learned while immersing yourself and practicing, and dig deeper. “This SOLID acronym seems to be mentioned a lot – what’s that?” These are the questions you’ll start digging into. Now it’s time to open a book or browser and learn more about concepts that you may only know by name. If a concept still seems ‘clear as mud’ after reading about it, lay it down and loop back around to another concept. It takes years to gain full and deep mastery of all the myriad concepts associated with software development. That being said, within a few weeks of immersion, refinement, and practice, you’ll have the skills to talk smartly and demonstrate your knowledge of common software engineering concepts!

Good luck!