Artifact Snap and AppImage for Linux

Some time ago I added an AppImage for my old game Artifact, as an easy way to install it on Linux. AppImages have a neat architecture, and will hopefully keep Artifact running on a lot of systems for a long time. AppImages can also be run sandboxed with Firejail if you have little trust in me or in the source of the AppImage.

Artifact running as a Snap

Now, AppImages do not really have a nice centralized place where applications can be discovered. There are some initiatives, but they feel a bit crude and lack polish.

Snaps and their central registry, on the other hand, feel way more polished. If anything, the content in the store often feels unpolished compared to the store itself. Snaps also run sandboxed by default. While this is good, it sometimes causes unexpected issues, and it can be opted out of.

Making a Snap of an LWJGL 2 application was a bit of a headache, but it worked out in the end. My experience with the Snap documentation was generally good, and most of the work was a breeze once I had the game running as a Snap.

Now, go play some Artifact on Linux!

Less light pollution; better photos.

So I finally got to try taking astrophotos from somewhere with less light pollution. I still have trouble with movement, either from my tracking lagging a bit, or from not having a remote control for the camera (so I cause movement in the telescope when I start the shot).

M42 – Orion Nebula again

M42 is so easy to find, observe, and shoot. I also finally got to see some of the Flame Nebula, by doing a 30 second exposure into what looked like nothing at all. It sadly did not turn out very well because of movement in the camera, but I finally saw something! Anyway, here is my M42 shot. Getting better at this!

M42 20s exposure with pretty much no editing. Increased contrast a bit and moved the black point.

M31 – Andromeda Galaxy

The moon would easily fit in this picture, but the Andromeda Galaxy takes up more space, so this is mostly the core and some of the arms. I think I could have fit most of the galaxy if I had rotated the camera. I also happened to catch M32 (barely visible at the left edge) and M110 (in the lower right corner). Looking forward to trying Andromeda with an even longer exposure, or many stacked images.

M31 30s exposure, and some editing (contrast, black point)

Astro log – Through the smog and light pollution

A month back I finally got a new stepper motor and got the tracking for my telescope working. One dark night I took it to a pretty dark spot close to Bergen and did some observations.

M42 – Orion Nebula

I have observed the Orion Nebula under bad light pollution before, but this time I got to observe it under better conditions. It was stunning. Very sharp.

The next day I took my first deep-space photograph ever from our apartment. The light pollution was really bad, and there was some smog as well. I also forgot that I could use a timed shot, so I think some blurring is due to the camera moving slightly after I started the exposure.

I think it turned out fine for a first attempt:

M42 taken with a 5–10 second exposure. Very heavy light pollution. The left image is the raw image, the right ones are versions where I increased the contrast and tried to remove the pollution. The latter is an attempt to make it look somewhat like a visual observation of M42 in my telescope (it is sharper when observed visually).

M1 – Crab Nebula

My original plan for the trip was to observe M42 and the Andromeda Galaxy. I was also hoping to get to see the Flame Nebula since it was really dark. I sadly did not see any trace of the Flame Nebula, so I started looking for some open clusters in Taurus. While scanning for them I suddenly saw that the Crab Nebula was close, and I found it immediately. It is the first supernova remnant I have observed, and I think I saw some small amount of detail. Hoping to get a picture of it one of these days.

M31 and M110 – Andromeda Galaxy and a friend

Andromeda is not that interesting to observe visually, since it is so hard to see anything beyond the core. I think I saw a bit more since it was really dark, but it was very faint. These two are prime targets for a photo some day, since that should bring out some more detail.

An injection induced horror story

TL;DR: Never use field injection in Spring, scope APIs well, and do not mock value objects.

As this story starts, I was responsible for an old application which was being replaced. The replacing application was made by an external team (largely outsourced, and supposedly seniors), but once in production, the new system would become part of the responsibility of the maintenance team I was on.

Beginning this transition, I started looking at the new system. My initial impression was good. The database schema was cleaned up compared to the old system. The project seemed more organized. It had a DB migrations system set up. A big improvement compared to what it was replacing.

To get more accustomed to the new application, I started looking at simple bugs. My first bug was a simple mapping error towards an external system. I quickly found the offending code and changed the implementation. My change should have broken some tests, but instead of assertion failures I kept getting weird NPEs from data that could not possibly be null.

Testing the tests

Weirded out, I started looking into the test suite. The tests all looked something like this:


@RunWith(MockitoJUnitRunner.class)
public class MapperTest {

   @InjectMocks
   Mapper mapper;

   @Mock
   SubMapper mockedSubMapper;
   @Mock
   SubMapper2 mockedSubMapper2;

   @Mock
   NuValue next;

   @Mock
   Value value;

   @Test
   public void testMock() {
         when(value.getFoo()).thenReturn("Foo");
         when(value.getBar()).thenReturn("Bar");
         when(mockedSubMapper.map(any())).thenReturn(next);
         when(mockedSubMapper2.map(any())).thenReturn(next);

         NuValue nu = mapper.mapMaiStuff(value);
         assertNotNull(nu);
   }

}

What does this do?

  • @InjectMocks will, when hit by the MockitoRunner, instantiate the @InjectMocks annotated class, grab the relevant @Mock objects, and add them to the @InjectMocks annotated one.
  • All the when().thenReturn() calls then create the mock behavior as well as build the return values, essentially re-implementing or ignoring most of the behavior that should be under test.

Even this simplified example is hard to read, but the tests I was dealing with had when() mock statements that went on and on, with anywhere from 2 to 10 mocked dependencies and several levels of mocked value objects.

What does this have to do with dependency injection and Spring? Is it not excessive mocking that is the problem?

The responsibility for this mess lies with the developers, but I think it is interesting to try to understand how they ended up in this dark alley. I think mocking is a symptom and not a cause here.

Much of the cause, in my opinion, lies with Spring. Spring dependency injection can be done by constructor, setter, or field injection. The classes I had trouble with all used field injection. They looked a lot like this:

@Component
class Mapper {

   @Inject
   private SubMapper subMapper;
   
   @Inject
   private SubMapper1 subMapper1;

   public NuValue map(AllTheData data) {
       String v1 = subMapper.map(data);
       String v2 = subMapper1.map(data);
       return new NuValue(v1,v2);
   }

}

The mocking was done because the class itself cannot be properly constructed without some magic. The only way to test here is to either fire up Spring (slow) or use something to reach in and set the dependencies. One way is to use Mockito's @InjectMocks.
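For contrast, a constructor injected version of the same class can simply be newed up in a test, no container and no reflection needed. A rough sketch in Kotlin, reusing the made-up types from the example above:

@Component
class Mapper(
   private val subMapper: SubMapper,
   private val subMapper2: SubMapper2
) {
   fun mapMaiStuff(value: Value): NuValue =
       NuValue(subMapper.map(value), subMapper2.map(value))
}

// The test just calls the constructor, so only the dependencies that
// actually need replacing have to be faked.
class MapperTest {
   @Test
   fun mapsTheValue() {
       val mapper = Mapper(SubMapper(), SubMapper2())
       val nu = mapper.mapMaiStuff(Value(foo = "Foo", bar = "Bar"))
       assertNotNull(nu)
   }
}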

While this partly explains what happened, it does not explain why the value objects were mocked.

I think the value objects were mocked because some of these methods take very large value objects as input, while the code under test only reads a very small part of the huge object. This means the test needs to create that huge value object, which is tedious. So someone came up with mocking the value objects and rewiring them using Mockito. (This essentially means the value object has been re-implemented.)

Both of these practices seem to have been adopted as convention. The result was:

  • Tests that clung to the existing class structure, but did not test anything useful. Mostly they tested themselves and Mockito. They did try to cover some useful cases though, so they could not just be deleted.
  • Extremely low development velocity. Every change to one of these classes caused a chain reaction of tests that had to be either hacked until they compiled, or fully rewritten. Fully rewriting a test often set off further chain reactions of changes that were really hard to manage.
  • It took close to half a year to get most of this code into a halfway acceptable shape with decent test coverage.

Most of these classes were refactored into pure functions. The classes were mostly transforming data from one representation to another, and there was no need to involve Spring at all.
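The end state looked roughly like this (a Kotlin sketch, with the transformations themselves made up for illustration): plain functions, no Spring annotations, nothing to inject and nothing to mock.

// No @Component, no injection: just data in, data out.
fun mapMaiStuff(value: Value): NuValue =
    NuValue(
        foo = mapFoo(value),
        bar = mapBar(value)
    )

fun mapFoo(value: Value): String = value.foo.trim().uppercase()
fun mapBar(value: Value): String = value.bar.trim()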

For some reason, whenever I see Spring projects, they all manage way too much through Spring. My hypothesis is that once an annotation based DI framework is in use, using it becomes an alluring path of least resistance. It is so easy to copy an existing class with @Component and hack from there, instead of thinking about the design.

Closing thoughts

This and other experiences have fueled a strong skepticism in me against annotation based dependency injection frameworks. I have found that they are not needed most of the time. They can be used responsibly, but are they worth it? In my experience they create just as many problems as they solve.

To be fair, Spring no longer seems to use field injection in its documentation. That said, Spring does define what can be done. Field injection at best saves 3-5 lines of code per dependency, and it is never the right thing to do. Spring should just crash at startup if it detects field injection; that would be sane.

As far as Mockito goes, I think it was mostly innocent in this story, though I do think @InjectMocks should log warnings when it has to break the class contract to add mocks. During all of this, I also had the displeasure of learning what Mockito.RETURNS_DEEP_STUBS does. The documentation does have this gem though (a redeeming factor in my eyes):

WARNING: This feature should rarely be required for regular clean code! Leave it for legacy code. Mocking a mock to return a mock, to return a mock, (…), to return something meaningful hints at violation of Law of Demeter or mocking a value object (a well known anti-pattern).

Good quote I’ve seen one day on the web: every time a mock returns a mock a fairy dies.

Pauper tribal – A fresh MTG format

A pauper-tribal example game in Cockatrice

One of the things I find most fun about Magic: The Gathering is exploring deck construction under new constraints. Some friends and I are currently exploring a format with these rules.

Rules

  • Normal constructed magic rules. Decks are 60 cards, players start with 20 life, 15 card sideboards, and so on.
  • Only cards that are legal in Pauper and not on the Pauper tribal ban list can be played.
  • The deck must at all times contain 20 cards from the same tribe. Tribal instants and changeling cards count towards this total.
  • The chosen tribe can not be Elf, Human or Goblin. This restriction is to ensure other tribes are feasible.

Ban list

The point of a format is to explore magic in an interesting way. If single cards dominate or are too obvious choices, they might end up here.

My thoughts so far

I really like that the constraints for this format are so simple. It is easy to make a format that is fresh, but where the constraints are complex. For some reason such complex constraints make a format feel inelegant and less appealing to me.

Babynavnvelgeren – Fast exploration of Norwegian given names

My good friend Feda Curic recently came to me and wanted to make something together during his parental leave. At first we started out with something a bit too ambitious (for the time we had set aside), but then his wife came up with a great idea we could finish in our limited amount of time.

She wanted an app where you could quickly look up names for babies. Here in Norway, Statistisk sentralbyrå (SSB) has statistics for all names used 4 or more times. SSB's user interface leaves something to be desired though. It is either small lists containing some names, or a name search where you input an exact name and then get statistics about that specific name.

The SSB interface is not at all useful for exploratory name searching. In Babynavnvelgeren (the app we made) we have lightning fast interactive search and filtering options. The data is available locally in the app, so it mostly works offline as well.

The very simple but effective UI of Babynavnvelgeren
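There is no magic behind the speed. A hypothetical Kotlin sketch of the kind of local filtering and sorting involved (the app's real data model and code are not shown here):

data class NameStat(val name: String, val count: Int)

// All name statistics are bundled with the app, so search is just an
// in-memory filter and sort over a local list.
fun search(
    names: List<NameStat>,
    query: String,
    leastPopularFirst: Boolean = false
): List<NameStat> =
    names
        .filter { it.name.startsWith(query, ignoreCase = true) }
        .sortedBy { if (leastPopularFirst) it.count else -it.count }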

Babynavnvelgeren makes searching for names easy and fun. My 8-year-old daughter loves searching while sorting by the least popular names. She found some very interesting ones, like:

  • Sau – Sheep in Norwegian.
  • Lillebil – I initially thought this was a bug for sure, but it is a name. My daughter finds it hilarious (“Lille” means small and “bil” is car in Norwegian).

More generally, I think interactivity and responsiveness are essential for exploration, and the process of exploration often leads to accidental discoveries and new hypotheses. UIs which support this kind of interactivity are very useful in my experience, and I think more applications and data providers should keep this in mind when presenting their data.

I also think there is a lot of low hanging fruit in this space, and small improvements in interactivity often cause massive increases in efficiency. For Norwegian names, Babynavnvelgeren has got you covered. Go get it!

Astro observation log

As a kid I was always very fascinated by astronomical objects. I vividly remember the first time I observed the Andromeda Galaxy in a pair of binoculars. I think I was around 14, and I had tried to find it several times before. I did not really know what it would look like in binoculars, and I remember freezing my hands off to keep it in view once I found it. Even if it was just a blurry blob it was amazing.

Fast forward 20 years, and I now have the knowledge, time and money to buy and use a telescope. I recently bought a 10 inch Dobsonian. I am still learning to use it well, but it has so far been worth every NOK spent.

Last night was my first observing session with the telescope under skies that were not extremely light polluted. These are the objects I have observed so far.

M35 and NGC 2158 (Open cluster in Gemini)

M35 is huge, and looks great at low magnification. It was also not very hard to find by following the stars in Gemini. An even lower magnification eyepiece would be useful for observing it next time.

M37 (Open cluster in Auriga)

Since it is far away from the closest clearly visible star, it was a bit hard to find, even though it showed up pretty clearly in the finderscope once I was in the right region.

M42 (Orion Nebula)

This is so far the only nebula I have been able to observe from the city. Sadly, Orion was not visible when I observed from the less light polluted location. I have not yet tried filters for contrast, but M42 is still very visible, and it looks great. Looking forward to observing it with filters and less light pollution.

M13 (Hercules globular cluster)

M13 gave M42 a run for its money. M13 was easy to find (it shows up clearly in the finder scope), and at low magnification it looked great. Increasing the magnification made it awesome. Suddenly thousands of stars exploded into view.

M65 and M66, part of the Leo Triplet

With the light pollution at the time, M65 and M66 were visible, but with little detail. They were not visible in the finder scope, and I just randomly found them by scanning from the stars in Leo. They both pretty much looked like Andromeda does in binoculars. I am hoping to see some more structure next time.

I also observed some galaxy in Virgo, but I have no clue which. It seems the area is filled with galaxies, and I found one just scanning around. Pretty neat.

A very welcome type-safe builders surprise

When Kotlin 1.0 came out, I tried type-safe builders a bit, with mixed feelings. They are a great way to create simple declarative DSLs for building hierarchies, but using them used to be quite a pain.

The main problem was that as the nesting got deeper, the DSL API would expose methods from all the enclosing scopes, even though only some of them were usable in the current scope. This was infuriating, since using a method in the wrong scope would often cause wrong behavior instead of an error. Here is an example using a simple XML DSL:

DSL gone bad example
A tag inside an attribute, WAT…

It makes no sense to have a tag inside an attribute, but this would not fail; instead it would create the XML document below. It is valid XML, but probably not what was intended given the nesting above.

Resulting weird XML

Yesterday I needed to create some XML, and I remembered type-safe builders being great for this. I quickly whipped together the simple API above, but was again confronted with the scoping problem. Initially I figured out a very hacky way to provide nice errors at runtime, but then I thankfully read up on the Kotlin documentation: in Kotlin 1.1, the @DslMarker annotation was added to ensure that only methods in the current builder scope are visible.
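To show the idea, here is a minimal, made-up builder (not the API from the screenshots): when both receiver classes are marked with the same @DslMarker annotation, the outer receivers stop being implicitly available inside nested lambdas.

@DslMarker
annotation class XmlDsl

@XmlDsl
class Tag(val name: String) {
    val attributes = mutableListOf<Attribute>()
    val children = mutableListOf<Tag>()

    fun tag(name: String, init: Tag.() -> Unit = {}) {
        children += Tag(name).apply(init)
    }

    fun attribute(name: String, init: Attribute.() -> Unit = {}) {
        attributes += Attribute(name).apply(init)
    }
}

@XmlDsl
class Attribute(val name: String) {
    var value: String = ""
}

fun xml(root: String, init: Tag.() -> Unit) = Tag(root).apply(init)

val document = xml("doc") {
    tag("child") {
        attribute("id") {
            value = "42"
            // tag("oops") {}  // does not compile: the outer Tag receiver
            //                 // is no longer implicitly available here
        }
    }
}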

This moves type-safe builders from good to awesome in my book.

DSL now fails to compile
With @DslMarker annotation, the tag is not visible in the attribute scope, and this nonsense fails to compile.

I am late to the party, but this feels like a change that is easy to miss, and that does not get enough attention. Superb of the Kotlin team to address things like these!

♥ Princesses versus giraffes ♥

TLDR; I’m writing a coop multiplayer game with my daughter, this is the current result! Works in Firefox and Chrome. Use arrows to move and space to fire. Share a URL to play with a friend.

Some years ago, my daughter figured out I made some computer games, and she even played one of them quite a bit. After a while she wanted something new, and we figured we’ll make a game together. She would draw concepts and come up with ideas, and I would try to make them happen in game.

The initial concepts she drew were these:

Together, we then made them into vector graphics, with some modifications.

Princess and "giraffe"
A princess and a giraffe… I guess

Tips on kid-friendly vector drawing programs would be very much appreciated; throw me an email or post a comment. We used Sketch, but Sketch is a bit overwhelming and distracting with all its features. I want a program which only has bezier patches and transformations on them, as well as fill, stroke and possibly opacity settings.

Going from concepts to a prototype

I had been wanting to try compiling Kotlin to JS for a while, so I started a project in IntelliJ and quickly threw something together using a plain HTML5 Canvas.

We drew some more concepts, and after some evenings implementing we had an infinite randomly generated castle, an arrow firing princess, a hyperactive bow carrying giraffe, and a bunch of collision detection bugs (yay for rolling your own).

Wriggling out of hard requirements

After a lot of fun triggering bugs, my daughter came up with some new requirements.

I want to play with my friends, and we should all be princesses!

These are sort of hard to implement; even disregarding networking, it would mean a total rewrite of how the world generation and camera worked. It would also need a solution for avoiding players getting stuck due to the camera movement of others, and so on.

Those giraffes are in for a surprise.

After some bargaining we made some new concepts, and we agreed to add a player controlled cloud, and a bunch of new giraffes.

Adding networking

For me this meant that I would need to add some kind of networking to the game. For browser games, the choices are:

  1. Communicate with a server using WebSocket and have that relay state, or run the game on the server.
  2. Negotiate a WebRTC datachannel, and send communication directly between the browsers.
  3. Have players install a browser extension like netcode.io, and use it instead of WebSocket.

Since the game is cooperative, there is little reason to run the game on a server. Actually, I really, really do not want to run the game on a server, for a bunch of reasons, mostly abuse and scaling troubles.

Using a server to relay state or input is also a bit funky, since it introduces a lot of unnecessary latency. Since I am willing to sacrifice some poor kids behind a symmetric NAT, I decided on option 2, and I have not regretted it.

I was cautious about doing this initially, since I had read this Gaffer on Games post which deemed WebRTC too complex, though that was in the context of server based architectures.

Having some more experience with WebRTC now, I agree a bit about the complexity, though I think it has gotten way better, especially with a more stable standard and more complete alternative implementations like rawrtc. I also ♥ how WebRTC abstracts away most of the P2P complications behind a very nice API.

Authoritative peer or GGPO?

To share state in the game, I needed to come up with a networking architecture. Initially I evaluated using something like GGPO, but in the end I chose to make the princess peer the authoritative peer, continuously syncing state to the cloud-playing peer, while the cloud peer only sends input. I chose this mostly for simplicity and time constraints. Since the game is cooperative, a lack of fairness is not really a problem either.

For the amount of work I put in, I am very pleased with how the networking worked out. Right now it is not tuned at all, just JSON over the data channel, but even without tuning and without any extra speculative integration, it has worked fairly well.
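To make the split concrete, here is a rough sketch of the two kinds of messages. The names and fields are invented for illustration, and kotlinx.serialization stands in for whatever does the JSON encoding:

import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

@Serializable
data class EntityState(val id: Int, val x: Double, val y: Double, val vx: Double, val vy: Double)

// Sent continuously by the authoritative princess peer over the data channel.
@Serializable
data class StateSnapshot(val tick: Int, val entities: List<EntityState>)

// The only thing the cloud peer ever sends back.
@Serializable
data class CloudInput(val tick: Int, val left: Boolean, val right: Boolean, val firing: Boolean)

fun encodeSnapshot(snapshot: StateSnapshot): String = Json.encodeToString(snapshot)
fun decodeInput(text: String): CloudInput = Json.decodeFromString(text)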

Where to go from here.

While the game is in a state of continuous updates, I think it is mostly just going to be small changes from now on. Maybe some sound effects and new graphics when we feel like it.

Rendering is currently also quite slow, and takes a lot of the frame budget. I would like to migrate to a framework with a WebGL based renderer. But sadly that seems like quite a bit of work, mostly due to using SVGs for graphics.

For future game projects, I will for sure start with a WebGL based framework, or possibly Unity Tiny, and raster based images.

That is all for now, go and see how far you can get in our game!

One way to stop a 2D spaceship as fast as possible.

For quite a while, I have wanted to try to create a simple touch based interface for a 2D spaceship game. I want to allow the player to simply drag anywhere on the screen, and have the spaceship move to that position and direction in an efficient manner. Ideally the most efficient manner.

Spaceships in 2D games usually have one main engine that provides forward thrust, and some that allow rotation around the ship's center of mass.

Moving from point A to B efficiently (in minimal time) is not trivial with such constraints, as changes to direction and thrust may have huge consequences for later possible movements due to inertia.

So instead of looking at the full A to B problem immediately, I wanted to look at something simpler first, namely going from having velocity \(v_0\) and pointing in direction \(\theta_0\) to having 0 velocity as fast as possible.

The idea I use originally came from talking to a colleague, but something very similar sounding is mentioned in planning algorithms, though the examples always seem to involve driftless systems. Anyway, my current approach involves these known quantities and assumptions:

  • \(a\) – Acceleration – The ship can only accelerate by a constant amount, and acceleration turns on and off instantly.
  • \(s\) – Turn speed – Rotating the ship requires no acceleration, and the ship has a constant rotation rate.
  • \(\theta_0\) – Initial orientation.
  • \(v_0\) – Initial velocity.

These quantities allow me to find a legal, but very suboptimal, way to stop. It simply involves turning the ship until it points directly against its velocity vector, and then accelerating until the ship stops. Both the time \(t_a\) needed to turn the ship and the time \(t_m\) needed to turn and then cancel the velocity are easy to calculate.

Turn until pointing against the velocity and initiate the burn at time \(t_a\). At \(t_m\) the ship has velocity 0.
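Concretely, if \(\Delta\theta\) is the angle the ship has to turn through to point directly against \(v_0\), the naive schedule is simply

$$ t_a = \frac{\Delta\theta}{s}, \qquad t_m = t_a + \frac{\lVert v_0 \rVert}{a} $$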

It is also easy to see that this is suboptimal: it would clearly be faster to start burning some time before the turn is fully completed, but the question is when to start the burn.

To allow for this freedom in my model, I therefore introduce a third time variable \(t_s\): the time at which the ship, while still turning, starts to accelerate. \(t_a\) now becomes the time when the ship stops turning and only accelerates.

Turn from the start, initiate the burn at time \(t_s\), and from \(t_a\) only accelerate. At \(t_m\) the ship has velocity 0.

Given these intervals, two integrals describe how the velocity will change when \(t_s\), \(t_a\) and \(t_m\) vary.

$$ v_x = \int_{t_s}^{t_a} a\cos(\theta_0+st)dt + \int_{t_a}^{t_m} a\cos(\theta_0+st_a)dt $$

$$ v_y = \int_{t_s}^{t_a} a\sin(\theta_0+st)dt + \int_{t_a}^{t_m} a\sin(\theta_0+st_a)dt $$

This gives two constraints that must hold for all solutions of this kind.

$$ 0 = v_{0x} + \int_{t_s}^{t_a} a\cos(\theta_0+st)dt + \int_{t_a}^{t_m} a\cos(\theta_0+st_a)dt $$

$$ 0 = v_{0y} + \int_{t_s}^{t_a} a\sin(\theta_0+st)dt + \int_{t_a}^{t_m} a\sin(\theta_0+st_a)dt $$
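Carrying out the integrals gives the same constraints in closed form:

$$ 0 = v_{0x} + \frac{a}{s}\left[\sin(\theta_0+st_a) - \sin(\theta_0+st_s)\right] + a(t_m - t_a)\cos(\theta_0+st_a) $$

$$ 0 = v_{0y} - \frac{a}{s}\left[\cos(\theta_0+st_a) - \cos(\theta_0+st_s)\right] + a(t_m - t_a)\sin(\theta_0+st_a) $$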

The most efficient solution to this problem is the \(t_s\), \(t_a\) and \(t_m\) triplet with the lowest value for \(t_m\).

This information allows me to formulate this as an optimization problem.

Since I want to minimise \(t_m\), the objective function simply becomes \({t_m}^2\).

This is subject to the two equality constraints given.

Since the objective and constraints are non-linear, I plug it into Optizelle, which is a framework for solving non-linear optimization problems.

The implementation can be found on GitHub; it uses autograd to calculate derivatives and Hessians. This is an incredible time saver, since calculating 9 combinations of partial derivatives would have been a major pain, not to mention having to recalculate them whenever I did something wrong.

Running the program with inputs \(a=2.0\), \(\theta_0=0\), \(v_0=[2,0]\) and \(s=\frac{\pi}{2}\) returns:

Optizelle output for the example run

The optimal point vector contains the values for \(t_s\), \(t_a\) and \(t_m\). This means that for a ship with the given input, it should start turning immediately, start the burn after approximately 1.43 seconds, stop turning and only accelerate at 2.43 seconds, and finally be at rest after 2.57 seconds, approximately 0.43 seconds faster than the naive version.

To test the result, I implemented a quick and dirty JavaScript program that simulates these choices and renders to a canvas:

Sometimes the ships end up drifting a bit after the simulation has finished. This is due to the discrete nature of the simulation not perfectly emulating the continuous solution (I do not integrate rotation analytically in the simulation). This could also have been a problem if I applied this style of planning to a game that did the same, but from the simulation above it looks negligible, which is great!

I am very happy with this result, and it seems like it could work for the larger problem as well. The next step I'll try is to tackle some specific cases of moving from point A to B efficiently. For those cases there will be many more time variables involved, and possibly many constellations of safe initial starting points, as well as possible freedoms to introduce in the model. It will be interesting to see how that works out.