Squadron – Next week of work


Plan for next week

This post shares insights from developing getsquadron.io.

I managed to outline the main features that I want to have. Let's kick off with something easy; these are the main tasks to finish by the end of next week.

  1. Read a simple JSON configuration file that describes the testing scenario
  2. Send N requests against a remote server, or specify the frequency of requests
  3. Record the time of each request and dump it into a simple CSV file
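To make the first task more concrete, a scenario file could look something like the sketch below; the field names are hypothetical, not a final format:

```json
{
  "scenario": "smoke-test",
  "target": {
    "url": "http://example.com/api/health"
  },
  "load": {
    "requests": 100,
    "frequencyPerSecond": 10
  },
  "output": {
    "csvFile": "results.csv"
  }
}
```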

My assumption for this week is that I should be able to perform a very limited load test against one server. In the future I am considering adding support for more servers. Of course, if your application is load balanced, you don't need this feature.
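Squadron itself will be written in .NET Core, but the gist of tasks 2 and 3 (fire N requests, time them, dump a CSV) can be illustrated with a short Python sketch; the function name and CSV columns are my own invention:

```python
import csv
import time
import urllib.request

def run_load_test(url, n, csv_path="results.csv"):
    """Send n GET requests to url sequentially and dump per-request
    timings (in milliseconds) to a CSV file."""
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # drain the body so the request fully completes
        timings.append((time.perf_counter() - start) * 1000)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["request", "elapsed_ms"])
        for i, ms in enumerate(timings, 1):
            writer.writerow([i, f"{ms:.2f}"])
    return timings
```

A real tool would of course need concurrency, error handling and the frequency option, but the shape of the problem is exactly this.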

Big believer of TDD

I am a big believer in TDD, especially in projects that you start from scratch. It couldn't be any different in this project. If you are following the work and the repository, you will find more code in the test solution than in the main project 😉

Abstract

As you might have read in the previous post, I already outlined my requirements for this project, but I have finally written a clear abstract of what I want, and that abstract will always be the source of truth for the direction of my work.

Test Web Application

Writing unit tests is cool, but I want to have a live organism to test against. That's why I decided to write a simple web application that will help emulate some scenarios. You might have already seen this project, because I started a series of blog posts about an ASP.NET Core application.

Makefile and .NET Core


Overview

In this post you will learn what a build script or build tool is, what products are out there, and why one is helpful during the whole life cycle of your application.

It is a part of the “Get Noticed!” series; check out how to create your first .NET Core application. If you already know how, check out this repository.

What is a build script?

Build script tools automate certain tasks that you have to do repeatedly with your code, data store, application and other tools.
For example, say you would like to get a clean run of our application (.NET Core). You have to invoke four commands: dotnet clean, dotnet restore, dotnet build, dotnet run. That is pretty irritating if you have to type them N times, over and over again.

What if you would like to get a clean database on each run? Or let's say that you would like to patch some metadata file with a version number every time you publish a new version of an application or library. Again, very problematic.

You don't really want to do it every time by hand; to be honest, sometimes you can't!
For example, if your publishing or deploying process is handled by an external machine, then it's impossible to do such a step manually. That's why we want to script it!

You will see later that you can automate various things, not only building your application but also further steps in the life cycle of your project.

What is Make?

 

GNU Make is a tool which controls the generation of executables
and other non-source files of a program from the program’s source files.

(source)

Put simply, GNU Make allows you to describe how to create an executable file, but you will see that it can do much more than that!

Where can that be useful?

Project building

We mentioned above that getting a clean run required us to type a lot of commands. OK, but hold on! Let's create a Makefile.

That way you can describe how to build all your projects. As you can see, we simply wrap our well-known .NET build tool commands with the Make targets clean, restore, build and run.

Moreover, we chained those up with one target, run-clean, so now if you run make run-clean, Make should run all the other commands in order. At the end you should get a running application. Isn't that simpler?
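A minimal sketch of such a Makefile could look like the following; it simply wraps the four dotnet commands and chains them into run-clean:

```make
# Wrap the well-known .NET Core commands in Make targets.
clean:
	dotnet clean

restore:
	dotnet restore

build:
	dotnet build

run:
	dotnet run

# Chain everything so a single command gives a clean run.
run-clean: clean restore build run

.PHONY: clean restore build run run-clean
```

The .PHONY line tells Make that these targets don't produce files of the same name, so they always run.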

Make and Docker

I assume that you know what Docker is. If not, there will be a post coming in the next few days, so stay tuned!

Let's say that your application depends on PostgreSQL, and for development you want a clean database each time you run your application.

As you can see, we added three new targets to our Makefile: run-with-db, database-cleanup and database-prep. Again, we just wrap some well-known Docker commands, like stopping and removing a container, or pulling an image and running a container. At the end they are chained into run-with-db, so that a clean database will be prepared and then our application will be run.
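A sketch of those targets might look like this; the container name, image and password are placeholder values, and run-with-db reuses the run target described earlier:

```make
# Hypothetical container name -- adjust to your own setup.
DB_CONTAINER = squadron-postgres

# Stop and remove the old database container; the leading "-"
# tells Make to ignore errors if the container does not exist yet.
database-cleanup:
	-docker stop $(DB_CONTAINER)
	-docker rm $(DB_CONTAINER)

# Pull the image and start a fresh PostgreSQL container.
database-prep:
	docker pull postgres
	docker run -d --name $(DB_CONTAINER) -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres

# Clean database first, then run the application.
run-with-db: database-cleanup database-prep run

.PHONY: database-cleanup database-prep run-with-db
```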

That's not the end!

Those two cases are just a few of the places where you can use a Makefile to make your life easier. Stay tuned; there will be more scenarios where you can use build tools.

Make is not the only tool out there; check out FAKE and CAKE as well.

Setting up .NET Core project

Overview

This tutorial will work on most operating systems, but I encourage you to get Ubuntu and go out of your comfort zone. It's very easy to start with Linux. Working with the console is very important to me, so all the work is done via the command line.

This article is part of “Get Noticed!”, project Squadron, so we will try to recreate what I have already done in the Squadron.TestApplication repository.

Install .NET Core

Installation of .NET Core is super easy, and I don't want to copy what is already written, so check out the .NET Core website and follow their steps. If you have any problems, let me know in a comment below and I will try to help you.

Validating the installation

Once you have installed the framework, open a command line and invoke

dotnet --version

You should get at least 1.0.1 (depending on when you are reading this article).

dotnetversionresult

Create folder structure

The main difference between standard .NET Framework tooling and .NET Core is the folder structure. Previously you didn't have to have a folder structure; your projects could live in different places on your system, and that information was stored in a solution file.

.NET Core moved towards a folder structure where you are not forced to have a solution file that keeps all the information about your projects, so you can open it in different IDEs or editors.
Let's kick off by creating a directory structure; you can download the full script here.
cd ~/
mkdir Squadron.TestApplication
cd Squadron.TestApplication
mkdir src
cd src

Convention says that you should have two folders, src and tests, placed at the top folder of your project. For now we will just stick with the src folder, because we are not going to create any test project yet.

Initializing project

Make sure that you are inside the src folder, then invoke these in order.

mkdir Squadron.TestApplication
cd Squadron.TestApplication
dotnet new console

That will initialize your first project.

Base commands

There are four base commands that you have to know; for more commands just invoke dotnet --help in the console.

  • dotnet new, initializes a new project and creates the base files
  • dotnet restore, restores all packages
  • dotnet build, compiles your project
  • dotnet run, runs your project

After you have initialized your new project, run these commands in order:

  • dotnet restore
  • dotnet build
  • dotnet run

After the last command you should get your first “Hello world”.

What is next?

Stay tuned; in the next post we will carry on with this project and implement an API based on ASP.NET Core.

Squadron – First steps

Yesterday’s tasks

Those were my yesterday’s tasks, check out the outcome.

trello-kick-off

Measure people interest and start gathering information about customers

As you might have read in the previous blog post, I want to be very focused on building the product from the beginning. It means one thing, marketing and proper product building. I will try to use my “engineering” point of view to a minimum.

That means I have to deliver product features quickly and check if my potential customers like it, but how can I check if people like the product or a feature?

I am going to start with two tools, Google Analytics and MailChimp. For now I will use just one main feature of Google Analytics: tracking how many users visited my website. That way I can easily tell whether people are interested in what I am doing or not.

MailChimp is simply there to build up my database of potential users; I will share with them some insights into my current work, and they will get early access.

These two basic tools will help you kick off gathering very useful information and metrics. Every time you publish or change something, you will know whether your changes or features drive more users or not.

The first iteration, make it simple!

The first iteration should be quick. It shouldn't be perfect; it can be crap. It's the internet, you have many lives to launch a product or an idea. If you are not lucky this time, you can try another time, and another time, and another time. IT'S FINE!

I took Amy's advice to heart, and that way I published getsquadron.io in less than 2 hours. That includes:

  • Get a domain getsquadron.io
  • Setup repository
  • Setup & customise Jekyll website
  • Setup mail server & MailChimp mailing list

I keep most of my personal stuff on AWS; I know that tooling well, so I will stay there at least for a while. I got the domain via Route 53 and the mail server via WorkMail. I was kind of amazed at how easy those tools are nowadays; a mail server and a domain have always been a pain in the ass.

Outcomes

Twenty-four hours after publishing getsquadron.io, I can say that I got some traffic and 3 new users on my mailing list!!

squadron-first-ga

mailchimp-first-ss

Books

These are the books that helped me a lot to understand what the actual problem of product development is, and how you should approach it if you build a product or a company that sells on the internet. Tech is not a problem at all; tech is just your toolbox.

Squadron take off – Get Noticed

Get Noticed

This is the time!

Unfortunately, last year I didn't take part in Get Noticed! If you don't know what it is, check out this website; translating it to English should do the work.
In short, “Get Noticed” is a contest where for three months you work on a personal project that you came up with, and you write about it on your blog. How amazing an idea is that?

The reason for it is simple: get noticed by other people, go out of your comfort zone!

Squadron as a tool for distributed load testing

The project that I am going to build is a tool for load testing. The main reason is that I couldn't find a tool or a product that would fit my needs. What I am looking for are these three main features: a simple DSL that describes scenarios and endpoints under test; being distributed in a way that works easily with the major cloud providers; and being runnable on most operating systems and Linux distros.

Technical goals are still in progress, mainly because I don't want to be very rigid; currently my Trello board is a big mess. I will try to share a draft of some goals with you, and you will see how I manage tasks and goals.

On the other hand, my personal goals for this contest and project are very different from those of the usual tech person. My main takeaway should be the experience of creating a product from scratch, one that could potentially make a profit. There will be a lot of knowledge that is not very technical but sits around that: marketing-ish things, SEO, customer tracking, OKRs. In general, getting the first customer is much harder than coding, at least for me!

What technology?

Well, technology is not really that important in this project, but we have to distinguish between two different projects here:

  • Core application that does the load testing: it is going to be .NET Core.
  • Website that allows you to manage your scenarios and work with cloud providers: that is going to be ReactJS and some free web templates.

I chose .NET Core because it's still the main platform I work on, and it will be much simpler to create the first iterations of the project with it. I am strongly considering rewriting it in GoLang/C++ at some point.

Take off!

Good luck to everyone taking part in “Get Noticed!”, you are amazing! I might show up on YouTube, so stay tuned 😉

How to program in GoLang on Windows

Recently I started programming in GoLang for one project that I want to contribute to: Terraform. For me it was natural to get GoLang working on Windows.

So I downloaded GoLang and Terraform, then I tried to compile, and it happened…

Basically, my execution path got too long, which is a known issue on Windows.

That's why it's impossible to have bigger projects working in GoLang on Windows: you can't even compile them. GoLang requires a deep directory structure, so you can very easily hit the maximum number of characters allowed in your execution path.

Conclusion: it is impossible to even compile a bigger project… go with Linux or OS X.

https://twitter.com/marcinnowacki/status/759509412310609921

The history of a DDoS attack – How my server died

This blog post is a postmortem of my infrastructure, which was attacked on Sunday by an Argentinian attacker and died because of a DDoS. I will share with you all the actions I took in order to bring back the stability of the services.

Summary

Attack started: 19 June 2016 at 3:20 PM UTC

Attack ended: 19 June 2016 at 4:10 PM UTC

Users affected: 30-40 users

Extra cost due to attack: less than $2

Existing Infrastructure

Let me give you a brief overview of the existing infrastructure of Helbreath Poland.

cloudcraft - Helbreath Poland

In the image above you can see the two components that make up my infrastructure: Route 53 and one small t2 EC2 instance.

Let's agree on something: it is not a difficult or “big” infrastructure, but for this purpose it works perfectly. Right now the server has 30-40 people playing at any given second; possibly with this VM we can go up by 100, maybe 150 more people? But for now it is fine!

DDoS, what is that!?

In the simplest terms, DDoS is a type of attack that sends a lot of data from lots of places (computers); it is called a distributed attack because you can use computers from different parts of the world to attack somebody's infrastructure.

Aggressors send so much data that your infrastructure can't handle the incoming packets, and eventually it will stop working or access will be very limited. This type of attack doesn't require really deep knowledge; anyone with access to the internet can prepare this kind of attack. You can find a deeper explanation on Wikipedia.

In our case, they were attacking ports 321, 1 and 3007 over TCP.

Full story

The calm before the storm

I knew that something would happen, because a player logged in to our server and threatened that he was going to destroy it. Well, after 5 minutes people started getting lag, and more lag, and then a lot of them got disconnected from the server.

So it begins, my actions

The server was basically killed; the VM didn't respond.

As I said earlier, people got lag and disconnections. I started doing some investigation and tried to log in to my VM, but… yeah, I couldn't even do that. RDP wasn't responding.

Decided to switch off all incoming traffic, and allow only my IP.

I decided to switch off all incoming traffic, which means the VM was taken out of public availability. That way I cut off all incoming good and “dirty” connections.

I did that by changing a rule in a security group, as in the image below. The first rule, All TCP, Anywhere, has been removed.

security-group-changed

While all traffic was disallowed except from my computer, I could log in and maintain the VM, which means doing backups, checking the logs for what happened, and looking for the attacker's IP address.

Gradually opened traffic, but it seemed there was still an issue.

The next decision I took was to start gradually allowing incoming external traffic to my infrastructure.

But as you can see below, there was a second hit, even greater. Between 15:30 and 16:00 it was quite calm, but then, when I allowed traffic in around 16:00, there was a big bang.

network-in-attack

 

Again, I repeated the first step: switched off everything and waited…

Check the IP address of the attackers

In the meantime, I was looking for the IP address(es) of the attackers, and I found that the attacker was from Argentina.

Add an entry to the ACL with the IP address; I decided to block their entire subnet

To prevent and block any dirty connection, I updated the ACL that manages and filters all incoming and outgoing traffic. I decided that the safest option was to block their whole subnet.

acl-blocked-entire-subnet

Again, start letting incoming traffic into the infrastructure.

In the end, around 16:05, I started once again letting people into the server.

You can see the incoming network traffic in the image below. From about 16:05-16:10 the data sent is at a fairly OK level if you compare it with what it was 20 minutes before.

People can log in, and they don't have any problems with the game.

healthly-situation

Problem solved, what next?

That was my quick story of what happened to me on Sunday afternoon. The problem is solved, but what about further actions to prevent this, or maybe a failover plan that can at least allow people to keep playing?

Introduce Load Balancer

First of all, what I have to do is introduce a load balancer (ELB), even for this one VM. In the future, if I notice that an attack is incoming, I can immediately spin up a fresh VM and redirect every player to that box. In the meantime, I have some extra time to deal with the attackers.

cloudcraft - Helbreath Poland V2

 

Let's imagine that an attack is incoming and the middle box is affected, so I immediately spin up two VMs and fire up the services on these boxes.

Because players connect by DNS, and the DNS name points to the ELB, they will be automatically redirected to healthy instances. Of course, this won't help if an attack is really, really serious and they attack the DNS directly.

Monitor incoming connections to get a better overview.

This is very important! My infrastructure didn't have this at the point of the attack. If this attacker hadn't logged into the game and threatened us, I wouldn't have known his exact IP address, which could have complicated things; it would probably have taken me much more time to solve the issue.

Here Flow Logs in AWS for your VPC come to help. They monitor and log all IP addresses that connect to your infrastructure. That way, if they attack again from a different subnet, I can get their IP addresses from the logs and then block that subnet's traffic to my infrastructure.

flow-logs-vpc

Set alarms on the usage of VM resources.

This part is also very important, and it plays nicely with the previous steps. On AWS you can set up alarms for when a specific resource goes beyond a certain threshold.

ddos-detector-alarm

In my case, I created an alarm for data sent to my VM by the external world. The alarm will go off when there is a spike of incoming data greater than 1 GB a minute; it will then send a notification to my email. That way I can be aware of a possible attack (or of big popularity of my server 😀) and jump into action.

Closing words

To sum up, it was a really amazing experience, even if some players were affected and I was really pissed off. I treat this as a lesson, because I learnt about a lot of additional functionality in AWS, and about a general ops approach to this problem.

Refactoring legacy GUI application to CLI

If you have ever wondered what it is like to work with 15-year-old legacy C++ code, and how to refactor it, this blog post is perfect for you 🙂

As you may remember, I promised to show you the work that I do for Helbreath. When I decided to work on it, the first decision I made was to try to get rid of the horrible GUI; that was the aperitif before the more serious work.

Before we dive into C++ code, let's have a discussion about why a GUI is evil in your backend server applications, shall we?

If it's a backend service, CLI only!

Be clean

The first argument is that your code is much cleaner, because the program doesn't carry the unnecessary code responsible for drawing and the behaviour of your GUI. Additionally, you don't mix the context of the GUI with the context of your service, which means that you don't have noise in your code.

Fewer resources and dependencies

Another very important argument: your server will need fewer resources to run your application. You can even run your OS without a GUI, in headless mode.

But wait, what about fewer dependencies? If you don't have a GUI, your code immediately has fewer dependencies on external libraries. Pure profit! That way you don't have to manage additional packages and worry that something won't work with “very special” environment or OS settings. Moreover, developers who want to work on the project are less likely to have problems with the project setup.

Automation and ops work

The ultimate argument, one that you have to read, and it applies to any software in production maintained by more than one person.

Having your service as a CLI will help a lot with ops work. With a CLI service you can automate everything from deployment and templating to the startup of your application, whereas with a GUI application you can't do that very easily, due to the manual steps involved.

The next important argument: remote access.

Ideally, you shouldn't have to have access to the whole server/VM/machine to maintain your application; instead, you can easily connect to the application remotely and manage it from your computer. This approach is more secure: we avoid direct access to the server, and we can also whitelist an IP address with a specific port.

Refactoring time

Old Way

Let's move on to the services that make the Helbreath server run.

old_way_hb_services

Above you can see how the two services looked before the refactoring: it was a horrible GUI with a lot of manual steps, such as providing the username/password for the database. Each time you restarted the application, you had to manually enter the database credentials. Imagine now that I do 20-30+ releases a day; it means I would waste those keystrokes every single time.

New Way

new_way_hb_services

On the other hand, the image above shows the current state of both services. It is much more beautiful, isn't it?!

A pure console, with some output information and nothing else!

No dialogue boxes, no fucking buttons, no weird messages. Just a pure console.
But HOLA! Wait! How can you see what is going on with your services?

Easy answer!

Logs, Luke, log everything!

In this case, I log everything to a file and then use nxlog to send it to Papertrail.

papertrail_log

Now let’s check some code!

At the beginning I created a story on GitHub, just to have a place where I can track my work, and then a pull request.

This is very important: every refactoring in legacy code (this code is almost 15 years old) is big and difficult. With this example I will try to share some of my strategies.

Do your research!

Spend a good amount of time analysing the code and its dependencies. For this specific problem it took me 1-2 days to understand the problem and come up with a solution. This is very important for young junior developers! You are a problem solver, not a code monkey; research is part of your job, so don't worry if you spend one or two days researching something.

Since I wanted to get rid of the GUI, I had to check which parts of the code depend on GUI code or libraries. As you can see here and here, I listed all the main places where the GUI sits.

Cheat, Wrap, Hide!

My next piece of advice: cheat if you can, and don't refactor everything at once!

Do small bits until your old code is so granular that you can understand the domain and rewrite it. In this commit, I wrapped all the things into a new class.

I cheated because it still has a GUI dependency (on the HWND handle), but it's hidden. At least it doesn't have code for dialog boxes, buttons, etc.

Remove, Remove, Remove!

The most enjoyable part is removing unused code, and here, as you can see, I removed a lot of graphics-drawing-specific stuff that is not in use anymore.

At the end of this refactoring I still have some dependencies on the GUI, mostly on the HWND handle, but they are necessary to run the service, because the old Windows messaging uses this library to create async calls over TCP/UDP. Yeah, you read that correctly: messaging requires a GUI dependency. Total madness. It ended up as fairly OK refactored code; I don't need any manual steps prior to running the server, and everything is automated. I am OK with that for now.

Helbreath Poland project

hb_blogpost

What is Helbreath?

Helbreath is an MMO game created back in 1999 by a Korean studio, Siementech; it seems that they are dead 🙁

At the same time, there was a group of people who created open-source code for this game, both the server and the client side.

It is very important to understand that those sources were developed in 1999/2000, so some approaches were really good at that time. Now some of these approaches may be obsolete.

What am I doing?

I took the sources developed ages ago by the community, put them on GitHub, and started fixing issues, refactoring some code and adjusting it to standards. You can expect a series of blog posts on this topic.

Why am I doing this?

Well, it is quite personal, because I started my programming journey with this game, back when I was 13 or 14. It was my first MMO game, and then I decided that I wanted to run my own server (this is an archived website of the server from back in 2006).

Well, at that time I didn't know that I had to learn programming to even start my own server. I downloaded the sources and yeah… I had to learn C++.

I read a few C++ tutorials and it was a painful journey, like really painful. As far as I can remember, the most painful part was understanding classes, objects, references and pointers.

It took me about two weeks to set up Visual C++ 6.0 (yeah, something like that exists), and then I immersed myself in C++, even to the point that I wasn't learning at school (I almost didn't pass to the next class), because every second I was thinking about programming and my “server”.

What is my goal ?

First of all, I want to clean up and refactor the current sources and fix all critical and major issues/bugs.

This series of blog posts can serve as something of a guide for junior developers, because I will show you a few things that you shouldn't do when you are writing your applications.
Then I want to run a server for people to play on, check its performance, and give something back to the HB community. And of course, I am a big fan of this game, so I will play 🙂

My very, very end goal is to have at least one component of the server rewritten in some language so that it can run on Linux. Moreover, the domain of the whole game is not clear, and I want to write down documentation and get more readable code.

Conclusion

Stay tuned, because a lot of content is coming! I spent the last month making this happen; I did a few snap-storms on my Snapchat about it. It's going to be amazing to see this transition.

Terraform in game development – Don’t Starve Together

dont_starve_together_blogpost

A while ago you could read about Vagrant for Don't Starve Together (aka DST); this time let's discuss a similar tool named Terraform.

As in the Vagrant blog post, I am playing with Terraform and DST, because it's quite a good problem on which to check out tools like Terraform.

You may be wondering what I am doing with all those games. Games were *THE* thing that got me started programming. It all began with Helbreath, where as a 13-year-old I wanted to create my own server, so I had to compile C++ source code; it was challenging for somebody with zero knowledge of C++. That's why I still love running private servers and giving back to the gamers' community.

Terraform is a tool that allows you to describe your infrastructure as code. This means you can write a JSON-like description that tells Terraform what it has to do to spin up the required parts of the infrastructure and achieve a fully working environment. This tool is ideal for any of the cloud providers it supports: Amazon AWS, DigitalOcean and others.

I have created a Terraform configuration for DST. The main place where my infrastructure is described is vm-dontstarve.tf.

This block declares that we are going to use the AWS provider, with an access_key, secret_key and a specific region. ${var.aws_access_key} and ${var.aws_secret_key} are how you reference variables.
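As a rough sketch (the region value here is illustrative, not necessarily the one I used), the provider block has this shape:

```hcl
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "eu-west-1"
}
```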

In this part we can see the block that holds the information about the AWS instance: what AMI to use (the OS image), the instance type, the SSH key name, your custom security group, and additionally how to connect to our VM during the provisioning step.

Here we describe the provisioning step; we can copy files, execute remote commands, or execute commands locally.
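Put together, the instance block with its connection settings and provisioners might look roughly like this; the AMI ID, user name, key paths and script name are placeholders, not the actual values from the repository:

```hcl
resource "aws_instance" "dontstarve" {
  ami             = "ami-xxxxxxxx"  # OS image, placeholder
  instance_type   = "t2.micro"
  key_name        = "${var.ssh_key_name}"
  security_groups = ["${aws_security_group.dontstarve.name}"]

  # How Terraform connects to the VM during provisioning.
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file(var.ssh_private_key_path)}"
  }

  # Provisioning: copy a file to the VM, then run commands on it.
  provisioner "file" {
    source      = "scripts/install-dst.sh"
    destination = "/tmp/install-dst.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install-dst.sh",
      "/tmp/install-dst.sh",
    ]
  }
}
```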

Next is the aws_security_group block, which describes the topology of ports and connections between your machines. DST requires port 10999 over the UDP protocol as incoming traffic, so we have to add an ingress rule; in the egress block we specify that we allow any outgoing traffic on any port and protocol.

The last security rule (ingress) opens SSH on port 22, which is required to connect to our machine and do the provisioning.
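A sketch of that security group, with the 10999/UDP game rule, the SSH rule and the open egress block, could look like this (resource and rule names are illustrative):

```hcl
resource "aws_security_group" "dontstarve" {
  name = "dontstarve-together"

  # DST game traffic comes in over UDP port 10999.
  ingress {
    from_port   = 10999
    to_port     = 10999
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH, needed to connect and provision the machine.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outgoing traffic on any port and protocol.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```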

For a long time I was wondering whether tools like Vagrant and Terraform could be helpful in game development, especially for maintaining server infrastructure, and in my opinion they are! Soon I will blog more about HashiCorp products and their usage in game development.