I design and develop back-end software for UNIX/Linux and (real-time) embedded systems.
There are a few steps between a software idea and the final solution.
Sometimes it is not clear whether a given concept is possible at all: for example, whether certain types of software would work on a given embedded system, or whether the communication between the components would be fast enough. Before diving in and preparing a software architecture that relies on such assumptions, it makes sense to verify them ahead of time.
Once the concept is clear and the biggest question marks are removed from the picture (via a feasibility study), the software architecture can be designed. This is essentially a translation of a business idea into a technical document describing in detail how to turn the idea into reality.
Proof of Concept
Knowing how a project could be done on a technical level is a starting point. A proof of concept is a simplified implementation of the architecture (for example, instead of supporting 30 commands there are just 2; instead of reading hardware sensors the code uses mock objects, etc.). This allows checking whether the concept holds and adjusting the course of action.
Once it is proven that the idea works, the final step is to follow the design to the end and implement all needed features. This of course includes not only the code for the solution, but also tests, which are in fact a kind of executable, living specification of the system.
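The "executable, living specification" idea can be illustrated with a small, self-contained sketch. The `parse_duration` helper and its expected values are made up for illustration; the point is that reading the tests documents the contract:

```python
"""A tiny illustration of tests as executable specification: each
assertion states the expected behaviour of a (hypothetical)
parse_duration helper, so the tests double as documentation."""

def parse_duration(text: str) -> int:
    # Hypothetical helper: turn "1h30m" into a number of seconds.
    seconds, number = 0, ""
    for ch in text:
        if ch.isdigit():
            number += ch
        else:
            seconds += int(number) * {"h": 3600, "m": 60, "s": 1}[ch]
            number = ""
    return seconds

def test_parse_duration():
    # The specification, in executable form:
    assert parse_duration("90s") == 90
    assert parse_duration("1h30m") == 5400

test_parse_duration()
```

A test runner executing such assertions nightly gives exactly the kind of report or project-state update described above.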
Does it sound like something I could help you with?
(Very) Technical details
If you want to see all the details regarding the tools, technologies and my approach, here they are...
This falls into three categories:
- CLI programs
- they take parameters, read data, process it, produce other data and exit.
- a script which renames photos taken with a digital camera according to the timestamp found inside each file's EXIF data;
- a program which checks if there is new data on the Internet (job offers, real estate, etc.) and sends a notification about it. This could be executed periodically at a given time of day;
- tests - programs for ensuring that the final product does what it should do (or, precisely speaking, does what a test writer expected it to do). Usually executed automatically (e.g. overnight) to deliver some sort of report or update the state of a project in a database (so that the project state can be monitored);
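A sketch of the photo-renaming idea, kept dependency-free: a real script would read the EXIF DateTimeOriginal tag, so here the file modification time stands in for it, and the function names are illustrative:

```python
"""Sketch of a photo-renaming CLI tool. The timestamp source is
pluggable; a real script would parse EXIF data, while this stand-in
uses the file's modification time to stay stdlib-only."""
import os
from datetime import datetime

def timestamp_of(path: str) -> datetime:
    # Stand-in for an EXIF reader (real code: extract DateTimeOriginal).
    return datetime.fromtimestamp(os.path.getmtime(path))

def new_name(path: str) -> str:
    # e.g. IMG_1234.JPG -> 2018-06-03_12-30-05.jpg, keeping the extension.
    stamp = timestamp_of(path).strftime("%Y-%m-%d_%H-%M-%S")
    ext = os.path.splitext(path)[1].lower()
    return os.path.join(os.path.dirname(path), stamp + ext)

def rename_photos(directory: str) -> None:
    # Note: a production version would also handle name collisions.
    for entry in sorted(os.listdir(directory)):
        if entry.lower().endswith((".jpg", ".jpeg")):
            src = os.path.join(directory, entry)
            os.rename(src, new_name(src))
```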
- software libraries
- pieces of code providing functionality to other programs.
I put two types of libraries into this group: those which provide some functionality and need another program to execute it, and those which provide some functionality plus an execution skeleton. In the latter case user-supplied code just needs to "fill in the blanks", i.e. the user writes functions that are called automatically.
- a library which provides functions and classes for accessing data from a sensor (like a temperature sensor);
- a program skeleton which expects the user to provide a function which is then automatically executed at the right moments;
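The "execution skeleton" pattern can be sketched in a few lines. The library owns the run loop and calls user-supplied functions back; the names (`Skeleton`, `on_sample`) are invented for illustration, not from any real library:

```python
"""Minimal sketch of a skeleton library: the framework drives execution
and user code only fills in the blanks via registered callbacks."""
from typing import Callable, List

class Skeleton:
    def __init__(self) -> None:
        self._handlers: List[Callable[[float], None]] = []

    def on_sample(self, handler: Callable[[float], None]):
        # User code "fills the blank" by registering a handler.
        self._handlers.append(handler)
        return handler

    def run(self, samples) -> None:
        # The skeleton decides when handlers run; users never call them.
        for value in samples:
            for handler in self._handlers:
                handler(value)

app = Skeleton()
seen = []

@app.on_sample
def record(value: float) -> None:
    seen.append(value)

app.run([21.5, 22.0])
```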
- background software
- programs which, once started, just run and process data on events (I/O or time).
In this category we have software which communicates with other programs via IPC or the network and acts upon received data by changing its state, delivering results, saving something to a database, file, log, etc.
- a web back-end running as a FastCGI daemon, which receives HTTP requests, processes them and delivers results via a RESTful API to the front-end or another tool;
- a WebSocket server, which delivers live data about the state of a device (temperature, CPU load, memory consumption, etc.) to connected front-end clients;
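The shape of such background software can be sketched as an event loop. Real code would block on a socket or on select()/epoll; here, to keep the sketch self-contained, events arrive on an in-memory queue and the event kinds are invented:

```python
"""Toy event-driven daemon loop: wait for an event, act on it by
updating state, repeat. Real I/O (IPC, network) is replaced by an
in-memory queue so the example is self-contained."""
import queue

def run_daemon(events: "queue.Queue", state: dict) -> None:
    # Process events until a "stop" sentinel arrives.
    while True:
        kind, payload = events.get()
        if kind == "stop":
            return
        if kind == "temperature":
            state["last_temp"] = payload     # e.g. sensor reading
        elif kind == "request":
            state["served"] = state.get("served", 0) + 1

q = queue.Queue()
for ev in [("temperature", 21.5), ("request", "/status"), ("stop", None)]:
    q.put(ev)
state = {}
run_daemon(q, state)
```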
I spent more than a decade working in the road-traffic industry, developing software for small embedded Linux devices (much weaker than, for example, a Raspberry Pi), so paying attention to available resources was a critical factor for success.
I did (and still do) a lot of shell scripting to automate repeatable tasks, and in general as a kind of glue to join pieces of software together.
I'm also familiar with LaTeX and I use it (or propose to use it) for all documentation which should be stored in text form and where changes should be easily trackable.
Sorry, but I don't do Windows, so technologies popular on that platform are out of my scope.
- Libraries, frameworks
For C++ development I prefer well-established standards like the STL or Boost - where possible, of course (on small embedded devices that is not always the case). I have a lot of experience with the ACE library, but I am gradually moving to Boost for things like networking, since Boost seems to be more actively developed and keeps up with the evolution of the language.
For building C/C++ projects I prefer CMake, or Make if need be. For testing, Google Test (including Google Mock) is usually the first choice, although lately I used Catch2 with FakeIt and it worked very well.
Among Python web frameworks I have used Django and Flask. I am more inclined to use Flask, especially for projects which are not heavily database-oriented, since Flask is simply more lightweight than Django.
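To show how little a small REST-style endpoint needs, here is a sketch written against the bare WSGI interface so it stays dependency-free; a Flask version would look similar but shorter still. The `/api/status` route and its payload are invented for illustration:

```python
"""Minimal REST-style back-end as a bare WSGI application: one
read-only JSON endpoint. Route and payload are illustrative."""
import json

def app(environ, start_response):
    # Serve a single JSON resource; everything else is a 404.
    if environ["PATH_INFO"] == "/api/status":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Such an app callable can be served by any WSGI server (or behind Nginx via a FastCGI/WSGI bridge) without changing the application code.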
- Linux administration
- Linux Debian setup, WWW (Nginx, Apache), SQL (MySQL), firewall (iptables), Docker, etc.
Over the years I spent a fair share of my time doing GNU/Linux administration (and I still take care of my own servers), so a full deployment (installation, configuration, fine-tuning, securing) of a GNU/Linux Debian server is something I can do too. That means setting up the basics first (SSH access, firewall, permissions on the system, etc.), then the web server (after years of using Apache I've been using only Nginx since 2011) and databases (MySQL). All that is rounded off with shell scripts automating as much as possible.
In 2018 I used Docker to create a network of simulated embedded devices for testing a centralized web system which had to interact with them. This allowed testing the web system with a large number of connected devices without physically having any of them.
GNU/Linux development environment
I develop software in a GNU/Linux environment.
Over the years I have had many opportunities to use different operating systems (various flavours of Windows, various types of GNU/Linux, FreeBSD, OpenBSD and Solaris, to name a few). The UNIX philosophy resonated the most with my own approach to software, and I find the UNIX way of doing things simply elegant. As a result, since 1999 I've been using Debian GNU/Linux as the only operating system on my computer (not as dual-boot) for all my professional and private projects.
I develop software for GNU/Linux and embedded systems.
Since 2004 I have been developing software which needs to run on embedded Linux devices (similar to a Raspberry Pi, but many times slower and with far less RAM), so over the years I have gained a lot of experience in this area. In some cases the software had to comply with soft real-time constraints, which required changes to the kernel, to the setup of the whole system and to the software itself.
Since 2015, in my spare time, I have been writing code for Arduino and ESP8266 - mostly for the fun of it - usually in C, but on the ESP8266 I have used MicroPython as well. Since the end of 2018 I've been working on software for the STM32 platform using FreeRTOS.
The last time I had an opportunity to write code for Windows was back in 2003. My task was to write a library (DLL) interfacing with a hardware device. It ended up that I developed the shared library in GNU/Linux, tested it there, and then the source code was merely compiled on Windows. This was possible without changes thanks to the ACE library, which shielded the logic from the underlying platform.