Spawnfest 2017!

Last weekend (9–10.12) I took part in SpawnFest 2017, and I’ve got some thoughts to share.

What is it?

SpawnFest is a hackathon (a programming challenge to build a project within a limited time) that was all about the BEAM – Erlang’s virtual machine. All projects had to run on top of it (so you had to write yours in Erlang, Elixir, LFE or some other language from this family). You could add JS for the frontend, use some third-party libs, etc. This was the first edition in 5 years, so I had to take part, as I’m a big BEAM enthusiast.

Warning, personal stuff below!

It was my first (organised) hackathon. The 48h deadline was quite stressful, and a lot of things didn’t work. Being at a beginner+ level didn’t help either. But it was so much fun! I worked with three of my friends on a silly idea that we managed to bring to life. I learned a lot not only about Elixir, but about developing and working under that kind of stress. Memes were created and we never lost our good spirits. A lot of the code we created was poor quality, but it didn’t matter. We did something in a language we’re excited about, together, in less than 48 hours. That was magical. I cannot recommend taking part in SpawnFest enough – it was well organised, everyone was welcome, and there were all the good things I cannot describe with words. It felt really awesome! Funny thing: when it was all over I was reminded that there are winners and prizes (the judges are still voting), but I had already got the best prize – the experience from this event.

So, what did we do?

An application to monitor plants. We planned to have a thermometer and a proximity sensor, but we didn’t manage to get them working in time, so we added a buzzer to the hydration and humidity sensors. This app (written using Nerves) communicated from a Raspberry Pi to our web app written in Elixir, deployed on Heroku. What was shocking was how damn fast it was! Below I’ll paste the README description of the short video demo that we made.

There’s an Elixir app with a Phoenix frontend open, showing sensor output. The sensors are connected to a Raspberry Pi. When a sensor is dry it shows a cactus; when wet, a water drop. First there’s the humidity sensor – if we spray it, the second image will change. As it wasn’t dried properly, you can see some changes later, as water drops flow down the sensor. Next, the hydration sensor is put in a glass of water – the first image will change. Below the images are charts with sensor data grouped by hour.

Finally, the “warning” button is pressed and the buzzer turns on. The “warning” button is a switch, so pressing it again turns the buzzer off.


See you next year! 🙂

Today I Learned #2

A while back I did a little test. I read the Deliberate Vim book, did the exercises and decided to go full Vim. So I installed ViEmu in my Visual Studio 2015. Aaaand had a few struggles. Some shortcut conflicts I had to resolve manually, and it still wasn’t that convenient to use.

I ended up skipping ViEmu for VS2017. But that didn’t last long – one day I noticed I had got really used to some of the Vim commands, and it was now harder to work without them. So I did some research and found a great recipe!

If you already have anything that changes key bindings installed (like ReSharper) – reset all shortcuts to default, so you get only Visual Studio’s default key bindings. After that, install ViEmu and let it take all the shortcuts it needs. Finally, reinstall ReSharper/apply the ReSharper scheme. These steps will give you minimal friction while working with Vim in VS.


Bonus round: for Visual Studio Code – just install a plugin, it works great!

PureScript: The First Look

After ElixirConfEu I decided to try PureScript. Partly for yet another frontend attempt, partly because it looked interesting, and partly because I wanted a little break with something new and very different.

I read a bit of this awesome book and did some of the exercises. Here are some first thoughts:

The bad:

  • Install npm, to install Bower, to install dependencies. I get it, it’s JS. It’s frontend. It’s too young to have its own package manager, and it’s even better that it uses a common tool. But come on, you can do better than this. Especially if apparently you can compile to C/Erlang/some others – not only JS.
  • Haskell-like docs. They’re not the worst, but they’re really not newbie friendly.

The good:

  • Types! This is the most awesome thing, really. You specify what type goes in, what goes out. It’s marvelous, especially for someone with a strong C# background.
  • Quite obviously – the FP approach
  • The book (mentioned earlier) is a great learning resource. It’s free and written by the PureScript creator
  • I know it’s not something crucial, but I really like the syntax
  • How easy it is to start – the VSCode tools are great, and you can already google stuff and get some answers
  • The community seems small but nice
  • Error messages are really, really helpful!

As you can see, there are way more “goods” than “bads”. Should you try it? Definitely! Should you use it in your pet project? Sure! Should you use it in production? It depends 😉 After going to production with dotnet core RC1 I’d say “hell yeah”, but it requires a team that wants (not “can“; wants!) to handle it, so my answer here is “it depends”. Nevertheless, I’m hyped and will do something more with it, but the break is over and I’m heading back to the BEAM world now.

Property testing

During ElixirConfEu in Barcelona, I learned about property testing. It looks pretty neat and it got me interested. The basics sound quite easy, but there’s more than meets the eye, and I’ve been reading and listening about it for a while.
As I don’t feel comfortable enough to do a deep dive into the topic yet, I’ll do an introduction. After I get a deeper understanding, with some “real life” examples (or maybe after writing some myself), I’ll write a follow-up.

Property testing is a term originating from a Haskell lib called QuickCheck. It was created to ease the pain of writing many tests. Instead of writing n specific unit tests, you can generate them.

Using QuickCheck (here is the list of ports to your language of choice) you define a property of the piece of code you’re testing.

For a trivial example – if you were to write your own sorting function, you could define a few properties: if you sort the list twice, the result won’t change; the only change is the position of the elements (so you don’t alter the values); and so on.

QuickCheck then generates data, runs n tests using this random data, and if it finds a failing case it performs something called shrinking – trying to find the minimal failing case. That can ease debugging, or let you see straight away what’s wrong.
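The whole generate–check–shrink loop can be sketched in a few lines of Python using only the standard library (the real QuickCheck ports are far more sophisticated; `my_sort` here just wraps the built-in `sorted` as a hypothetical function under test):

```python
import random
from collections import Counter

def my_sort(xs):
    # hypothetical function under test -- swap in your own implementation
    return sorted(xs)

def prop_idempotent(xs):
    # sorting twice gives the same result as sorting once
    return my_sort(my_sort(xs)) == my_sort(xs)

def prop_same_elements(xs):
    # only positions change, never the values themselves
    return Counter(my_sort(xs)) == Counter(xs)

def shrink(xs, prop):
    # naive shrinking: keep dropping single elements while the property still fails
    changed = True
    while changed:
        changed = False
        for i in range(len(xs)):
            candidate = xs[:i] + xs[i + 1:]
            if not prop(candidate):
                xs = candidate
                changed = True
                break
    return xs

def quick_check(prop, runs=100):
    # generate random input lists and check the property against each of them
    for _ in range(runs):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 20))]
        if not prop(xs):
            return shrink(xs, prop)  # a minimal failing case
    return None  # all runs passed

assert quick_check(prop_idempotent) is None
assert quick_check(prop_same_elements) is None
```

A hundred generated cases replace a hundred hand-written ones, and a failure comes back already shrunk to its smallest reproducer.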

While it’s all fun, I’m still not sure in which cases in commercial code this is the best approach. It also turns out that properties form patterns of their own – and I’m yet to learn about all of this.

Nevertheless, I’m quite hyped and want to learn more – it seems more like an easy-to-learn, hard-to-master useful tool than a novelty, but only time will tell.

Integration series: Messaging

Last time we spoke about some integration methods we can use.

As we saw, we’d like methods that are not so tightly coupled, can produce lots of little data packages (like file transfer), are easily synchronizable (like a shared database), hide the details of the storage’s structure from applications (unlike a shared database), and can send data to invoke behavior in another app (like RPI) while being resistant to failure (unlike RPI).

And here messaging comes into play. The rules are simple: you create a message, send it to a message channel, and someone waiting for this kind of message will get it. While it has some problems of its own, it is reliable, frequent, fast, and asynchronous.

  1. Being asynchronous means you won’t block a process while waiting for the result/answer. The calling app can continue with its work.
  2. Decoupling. Messages are sent to a message channel without knowing almost anything about the receiver. The common interface is the set of message types sent, not the bindings between apps. It also allows separating integration development from application development.
  3. Frequent, small messages allow applications to react almost immediately by sending more messages.
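All three points can be seen in a tiny in-process sketch, here in Python with `queue.Queue` standing in for the message channel (the app names and message shapes are made up; a real system would use a broker such as RabbitMQ):

```python
import queue
import threading

# the message channel: sender and receiver know only the channel and the
# agreed message shape, not each other (decoupling)
channel = queue.Queue()

def sensor_app():
    # sends small, frequent messages and continues immediately -- it never
    # blocks waiting for the receiver (asynchrony)
    for reading in (21.5, 22.0, 23.4):
        channel.put({"type": "temperature", "value": reading})
    channel.put({"type": "shutdown"})

def dashboard_app(received):
    # reacts to each message as it arrives
    while True:
        msg = channel.get()
        if msg["type"] == "shutdown":
            break
        received.append(msg["value"])

received = []
consumer = threading.Thread(target=dashboard_app, args=(received,))
consumer.start()
sensor_app()       # returns right away; the dashboard processes asynchronously
consumer.join()
assert received == [21.5, 22.0, 23.4]
```

Neither side holds a reference to the other – swapping the dashboard for a different consumer requires no change to the sensor app.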

And there are many more that we’ll explore in the series. Why will I write a series on it? The main disadvantage of messaging is the learning curve. While other methods are fairly easy to use, messaging and async thinking are not something we’re used to. But once learned, these concepts will help you not only when integrating lots of enormous applications. You can also apply them to “integrate” the classes/functions/actors in your code.

Introduction to integration

I started to get more into integration and integration patterns. There are a few reasons:

  • Open Settlers II will be created with integration (and possible UI integration) in mind
  • It will be helpful in my daily job
  • I feel that it’s an important topic in software engineering

With that settled, let’s briefly talk about some integration methods.

File transfer

We want two (or more) applications to exchange data. We can use the simplest solution – write it to a file for others to read. (Almost?) every non-esoteric language has some file read/write functions built in. It is also easy to do no matter what environment you’re working in. The coupling is not so tight, as application devs can (should?) agree on a common file format(s) to work with. Changes in code won’t change the communication as long as the output file stays the same. With JSON it’s easier than ever. Even with third-party apps it’s still trivial to consume messages from software we have no influence on.
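The whole method fits in a few lines, here sketched in Python with JSON (the producer/consumer names and the data are hypothetical; the only shared knowledge is the file location and the format):

```python
import json
import tempfile
from pathlib import Path

def export_orders(path):
    # hypothetical producer app: writes its output as JSON to an agreed location
    orders = [{"id": 1, "total": 9.99}, {"id": 2, "total": 14.50}]
    path.write_text(json.dumps(orders))

def import_orders(path):
    # hypothetical consumer app: a separate program that knows only the
    # agreed file location and format, nothing about the producer's code
    return json.loads(path.read_text())

with tempfile.TemporaryDirectory() as d:
    shared = Path(d) / "orders.json"
    export_orders(shared)
    assert import_orders(shared) == [
        {"id": 1, "total": 9.99},
        {"id": 2, "total": 14.50},
    ]
```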

There are some downsides, too. There is a lot of work in deciding on the file structure and file processing. Not to mention the storage location, naming conventions, delivering the file (if one app doesn’t have rights to the output location of the other), and the timing of reads/writes (what happens if one app reads while the other writes?). But it’s all nothing compared to one big problem: changes propagate slowly (one system can produce a file overnight, after the other’s “collection”). Desynchronization is common, and it’s easy for corrupted data to spread before any validation (if validation is even possible).

Shared database

A shared database is a remedy for the synchronization problem. All data is in one central database, so information propagates instantly. Databases also have transaction mechanisms to prevent some reading-while-writing errors. You also don’t need to worry about different file formats.
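The instant propagation and the transactions can be illustrated with Python’s `sqlite3` (a simplification – both “applications” here share one in-memory connection, whereas real apps would each connect to the same central database server):

```python
import sqlite3

# one central database shared by two "applications" (in-memory for the sketch)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")

# app A writes inside a transaction (the connection context manager commits)...
with db:
    db.execute("INSERT INTO customers VALUES (1, 'ann@example.com')")

# ...and app B sees the change immediately -- no file hand-off, no sync delay
row = db.execute("SELECT email FROM customers WHERE id = 1").fetchone()
assert row == ("ann@example.com",)
```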

But it also comes at a price. It’s difficult to design a shared database. Usually the tables are designed to fit different applications and are a pain to work with. It’s worse if we’re talking enterprise-level solutions with some critical app – its needs will be put first, making work harder for the others. After the database design is created, there’s a tendency to leave it as it is – changes can be hard to follow. Another problem is third-party software. It will usually work with its own design, which may change with newer versions. The database itself can also become a performance bottleneck.

Remote Procedure Invocation

Sometimes sharing data is not enough, because data changes may require actions in different applications. Think of changing your address at a government service – there are a lot of adjustments to make and documents to generate. Each app maintains the integrity of the data it owns. It can also modify that data without affecting other applications. Multiple interfaces to CRUD data can be created (e.g. several methods to update data, depending on the caller), which can prevent semantic dissonance and enforce encapsulation.

It may loosen the coupling, but it’s still quite tight. In particular, having to do things in a particular order can lead to a muddy mess. While developers know how to write procedures (it’s what we do all the time, right?) and it may seem like a good thing, it’s actually not so good. It’s easy to forget that we’re not calling a local procedure, and that it will take more time or can fail for multiple reasons. This way of thinking also leads to quite tight coupling (as stated before).

As always, there’s a tradeoff. But do we have the best approach here? Or can we do even better? I’ll address these questions in the next post in the series.

Chicago vs London TDD

Somewhere near the very beginning of my software engineering journey, as a fresh Junior, I happened to talk with a colleague of mine. I remember him saying:

So, when someone says [during the interview]
-I know TDD!
I ask:
-So tell me the difference between Chicago and London style.

This happened to be The Great Filter, as many people didn’t know that. Luckily it wasn’t my interview, as I didn’t know either. So, naturally, I did some googling.

It turns out that it’s not rocket science at all.
Let’s say we’re testing methods that talk to a DB (using some injectable context, of course).
Chicago style focuses on results. Here you check whether the result you get back is the same as expected.
London style focuses on behavior. Here you mock the context and then verify that the methods you need to call were called the defined number of times.

Chicago style focuses on results. London style focuses on behavior.
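The two styles side by side, sketched in Python with `unittest.mock` (the original discussion is C#-flavored; `UserService` and its in-memory DB are hypothetical stand-ins for a real injectable context):

```python
from unittest.mock import Mock

class UserService:
    # hypothetical service under test, with an injectable DB collaborator
    def __init__(self, db):
        self.db = db

    def rename(self, user_id, new_name):
        user = self.db.get(user_id)
        user["name"] = new_name
        self.db.save(user)
        return user

# Chicago style: use a real (in-memory) collaborator, assert on the RESULT
class InMemoryDb:
    def __init__(self, rows):
        self.rows = rows
    def get(self, user_id):
        return self.rows[user_id]
    def save(self, user):
        self.rows[user["id"]] = user

service = UserService(InMemoryDb({1: {"id": 1, "name": "Ann"}}))
assert service.rename(1, "Bea")["name"] == "Bea"

# London style: mock the collaborator, assert on the INTERACTIONS
db = Mock()
db.get.return_value = {"id": 1, "name": "Ann"}
UserService(db).rename(1, "Bea")
db.get.assert_called_once_with(1)
db.save.assert_called_once_with({"id": 1, "name": "Bea"})
```

Note how the London test needed no data setup beyond one stubbed return value – it only cares that `get` and `save` were each called exactly once with the right arguments.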

So, it’s easy, right? Also – you can mock in Chicago style too, and by checking the results you get, test behavior, right? And why is it so important?

I’ll start with the second question. If you start mocking around and checking for results, your setup/arrange parts tend to grow and get more and more complicated. Also, if you want to test results, you have to provide some sensible data. This makes testing more cumbersome than needed and results in greater reluctance to write tests. In my opinion, that kind of test works best as integration/end-to-end tests, and also for unit testing where you have no “complex” (or maybe any?) side effects. Pure functions are a great example – for the same input, always the same output, without any side effects. It’s very easy to write those tests, and the arrange part will be small, if not almost nonexistent.

When you verify behavior, you don’t care about carefully setting up mocks, populating data, or thinking about complex relations. You just want to know that the system behaves the way you want it to behave. Databases, messaging systems, IO operations, etc. are good places for it. You have other kinds of tests to check whether your system works correctly with those live elements. Here you want to check whether you handle them correctly.

It’s easy, yeah. It doesn’t seem that important. But it’s really easy to forget London style and check for results everywhere. Writing tests starts to be painful, they take more time and, out of nowhere, you’re buried under a pile of complexity.
“But I did unit testing, why is this happening?!”
Because maybe there’s more to writing tests than just Assert.AreEqual(expected, actual) 😉


PS: Both ways are equally important and have their own purpose. Don’t focus only on one and you should be fine.