I’ve been looking at mini PCs. At home, I have a large and quite power-hungry server that I use for running all my virtual machines.
Whilst useful, it does take a considerable amount of power – about 250W, all the time. With the cost of electricity these days that’s a not inconsiderable sum: it uses about 6kWh a day, which is roughly £1.50 a day, £10 or so a week, and over £500 a year.
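For anyone who wants to sanity-check those numbers against their own bill, here’s a rough Python sketch – the 25p per kWh unit price is an assumption on my part, so put your own tariff in.

```python
# Rough running cost of an always-on 250W server; the unit price is an assumed tariff.
watts = 250
unit_price = 0.25                      # £ per kWh -- substitute your own rate

kwh_per_day = watts * 24 / 1000        # 6.0 kWh
daily = kwh_per_day * unit_price
print(f"{kwh_per_day:.1f} kWh/day -> £{daily:.2f}/day, £{daily * 7:.2f}/week, £{daily * 365:.0f}/year")
```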
I needed to find something better, and this little mini PC from MSI caught my eye. It also made me realise that this is the perfect device to recommend to people setting up their own home lab for dabbling with AWS and related DevOps things…
Now if you have been following my ramblings, you will know that I am a great proponent of learning by doing – and there is no better way of learning this than getting your hands stuck in and, well, doing stuff. The only trouble is that it can be a bit expensive…
Amazon do a fantastic job with the free tier, meaning that you can get your hands on some excellent things totally free of charge and actually use them for real production workloads. This is good – and if you don’t have your free tier account set up you really should go and do it right now. The only problem is that it is a bit limited…
You see, what you really need to learn properly is something like GitLab – and that needs memory. AWS are quite understandably not going to hand out machines with 4 to 8GB of RAM for free, and that is what you need to really test GitLab and the like… so you have a choice: either suck up the cost – which isn’t much – or struggle with the free tier and have a lot of frustrating out-of-memory issues to solve.
There’s also the problem of using up the free tier – if you fire up an EC2 instance and it crashes, you have just used an hour’s worth of run time. Remember that EC2 free tier usage is counted by the hour and you get 750 hours a month. If you crash your EC2 instance ten times and have to restart it, that’s ten hours of running gone, even if each instance was only up for a few minutes.
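To see how quickly that adds up, here’s a small sketch – the uptimes are made-up figures, and it assumes each partial hour is rounded up to a full instance-hour.

```python
import math

# Made-up uptimes (in minutes) for ten failed attempts; each partial hour is
# assumed to count as a full instance-hour against the monthly allowance.
uptimes_minutes = [7, 3, 12, 5, 9, 2, 15, 4, 6, 8]
hours_used = sum(math.ceil(m / 60) for m in uptimes_minutes)
print(f"{hours_used} of your 750 free-tier hours gone for a few minutes of actual running")
```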
So you either pay for an EC2 instance, or pay for the electricity to run a server at home – as well as its capital cost, which could be considerable…
Enter the N100 from Intel.
This is a processor that uses a mere fraction of the power of anything else, yet is still performant enough to get a lot of work done. My little cube computer sips about 11W of power – at full load! The running cost of that is next to nothing – no more than a couple of energy-saving lightbulbs; about 6p a day, £25 or so a year. The complete computer cost me no more than £300 with memory and storage, and I will get that back in less than a year just from the power savings of not running the old server…
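Here’s that back-of-envelope payback sum as a sketch, using the same assumed 25p per kWh tariff and the £300 I paid – adjust the figures for your own setup.

```python
# Payback time for replacing a 250W server with an 11W mini PC.
old_watts, new_watts = 250, 11
unit_price = 0.25                       # £ per kWh -- assumed tariff
capital_cost = 300.0                    # £ for the mini PC with RAM and storage

daily_saving = (old_watts - new_watts) * 24 / 1000 * unit_price
payback_days = capital_cost / daily_saving
print(f"Saving £{daily_saving:.2f} a day; payback in roughly {payback_days:.0f} days ({payback_days / 30:.0f} months)")
```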
To learn DevOps, you cannot beat learning by doing. So I suggest that you get a small computer, install Proxmox on it, and then install the infrastructure you need. Get GitLab installed, and set up a Linux instance with the AWS CLI on it. Start poking at AWS with Terraform runs driven by your CI/CD pipeline. This is how you learn…
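Once the Linux VM has credentials configured (aws configure), a quick sanity check from Python proves the lab can actually reach your account. This is just a sketch using boto3 – it assumes boto3 is installed and your free tier credentials are already set up.

```python
import boto3

# Confirm the lab VM's credentials work and show which account they belong to.
sts = boto3.client("sts")
identity = sts.get_caller_identity()
print(f"Talking to account {identity['Account']} as {identity['Arn']}")

# List any S3 buckets visible to these credentials -- a harmless, free API call.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("bucket:", bucket["Name"])
```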
I’ve shot a couple of videos showing the inside of the MSI Cubi, and how I configure the disks on it to support Proxmox. The rest is up to you to learn and puzzle about – but this is an excellent starting platform for learning how you can use and leverage AWS to enhance your career. I will build upon this and show how you can extend your journey into AWS in future posts. I would have to say though – please excuse the quality of the video and the sound – I know they are not exactly ideal, and any info or tips on improving them are warmly welcomed!
People ask me how to start off in IT as a DevOps engineer. If you’ve asked me and been referred here – congrats. You are one of the people who made me sit down and actually write out the reasons why, rather than just mumble some half-thought-out response.
Before we go any further though, this post is going to be pretty Amazon Web Services heavy. There are other cloud providers out there of course, but AWS to me is the one that offers the greatest breadth of services, and is of use in almost every situation. So please don’t think I’ve never used Azure or GCP – I have – but for someone starting out, AWS is the clear winner.
So you want to become a DevOps engineer? How long do you think that is going to take?
Well, it’s decades. At least. Sorry to break that to you. Let’s look at why that is the case though…
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
DevOps uses an amazing variety of tools and skillsets to deliver what it promises, and that takes a lot of skill and commitment. DevOps engineers typically come from a development or a sysadmin background, and they are generally highly competent in those fields. Getting there takes time, and until I started looking at the enormity of what I needed to know, I didn’t fully appreciate what was needed, or how long it would take me to learn it again from scratch – so I wrote a roadmap of what I would need to know, as a minimum.
You can see the map here, attached as the graphic. Everything on it is needed – you don’t have to be an expert in all of it, and to be honest no DevOps engineer is going to have truly deep knowledge of every item. But you do have to have a good appreciation of each one, know what the tooling and technology does, and know where to get answers when you need to go looking. As a DevOps engineer you are expected to know these things and to be able to fix them – and generally the expectation is that you can, and will, do so.
Each section has my recommendations for what you want to learn, and some useful resources for doing so.
Let’s start with the first.
Coding
This is the prime requisite. If you cannot code, you cannot develop things, and that is a pretty poor situation for someone whose title starts with “dev”. You need a good coding base, and these days it is easy to make a recommendation. It used to be a toss-up between Python and Java, but now, with the seemingly unstoppable rise of Python in big data, Python is the easy choice.
Learn Python, learn it every day, use it every day to solve problems and develop code snippets. Once you have Python covered, it will be far easier to pick up more formal languages, like Rust and Go as well as Java.
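The sort of daily practice I mean is small and practical – for example, a throwaway script that counts log levels in a file. This is just an illustrative sketch; the path is an example, so point it at any log you have lying around.

```python
from collections import Counter
from pathlib import Path

def count_log_levels(path):
    """Count how many lines mention each common log level."""
    counts = Counter()
    for line in Path(path).read_text(errors="ignore").splitlines():
        for level in ("ERROR", "WARNING", "INFO"):
            if level in line:
                counts[level] += 1
                break
    return counts

if __name__ == "__main__":
    print(count_log_levels("/var/log/syslog"))   # example path -- use any log file you like
```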
Operating Systems
No matter what code you write and develop, at some point you are going to have to run it on an operating system somewhere… You can go serverless, and that is a possibility, but even the best serverless systems tend to have a few places where a long-term task needs to run, and that implies an operating system somewhere. Most people will be best served learning Linux to start with, which then raises the question of which distribution. I would, in my experience, say that about 90% of all commercial use cases run on either Red Hat or Debian, with the remainder using SLES. It is probably wise to learn a little more than a standard user does about Windows and Windows Server, but unless you know you are going to need it in your current or targeted employment, I wouldn’t consider it just yet.
Linux – Debian and RHEL
Also, consider that to run an OS effectively you need to know a considerable amount about its underlying workings. Networking, security, monitoring and troubleshooting – familiarity with these is the bare minimum that is needed.
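As a taste of the networking and monitoring side, here is a minimal Python sketch that checks whether a handful of services are answering on their ports – the host and port list is purely illustrative, so swap in your own.

```python
import socket

# Illustrative list of services to poke at; replace with your own hosts and ports.
targets = [("localhost", 22), ("localhost", 80), ("localhost", 443)]

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in targets:
    state = "open" if port_open(host, port) else "closed or filtered"
    print(f"{host}:{port} is {state}")
```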
Cloud
You don’t need to know any cloud systems at all to actually be a DevOps engineer. There isn’t anything that says you have to take all of your on-premise systems and put them in the cloud. It’s just that, once you start doing things the DevOps way, the sheer utility, convenience and scalability of the cloud becomes quite clear, and it is a natural accompaniment. As I have said before, AWS would be my first and best choice – it’s by far the most popular cloud, and it has such a huge range of services, technologies and facilities that you will find something useful regardless of your use case – and you will also find things you never knew existed that are immediately useful.
The other useful thing about AWS is that you can use it, for free, for an entire year. This makes it an excellent platform to learn on, free of charge. The link for this is below. There is also excellent documentation, which, although sometimes a little voluminous shall we say, is extremely comprehensive.
AWS has a number of common technologies, and the following are links to the documentation for the most commonly encountered ones. These are the core learnings that everyone using cloud will need, more or less immediately.
Whilst learning and getting up to speed on Python, I would also strongly suggest that you learn AWS at the same time. They are very different things, and I have always found that learning a couple of things at once stops me from getting stale – if I am bogged down in learning how Python does classes, then switching to something else, like AWS, stops me from losing interest.
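A nice way to do both at once is to work through the classic first AWS exercise from Python rather than the console: make a bucket and put something in it. A sketch with boto3 is below – the bucket name and region are placeholders (bucket names are globally unique), and it assumes your free tier credentials are configured.

```python
import boto3

region = "eu-west-2"                         # example region -- use your own
bucket = "my-devops-lab-example-bucket-123"  # bucket names are globally unique; pick your own

s3 = boto3.client("s3", region_name=region)
s3.create_bucket(Bucket=bucket, CreateBucketConfiguration={"LocationConstraint": region})
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"first object uploaded from Python")

# List what is in the new bucket to prove it worked.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"], "bytes")
```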
Version Control
Version control systems let you organise and keep track of your code, and track all the changes that have been made throughout the development of the code base. It is often hard to see the use of this with just one person – and in truth it’s not that much use when starting out – but consider how you track changes with dozens of people working on one codebase. When you have over a billion lines of code, as some large systems do, with maybe a couple of thousand people working on a few hundred interlinked applications, you need to track these changes.
The answer is simple and it is called Git.
Git was written 20 years ago, and essentially nothing else is used now. It is renowned as a fearsomely unfriendly program, and that’s true to an extent, but there is good documentation and there are good courses to learn from as well.
Just remember, EVERYONE makes at least one massive Git howler and does something inappropriate with a rebase at some point in their career. Grasp the nettle firmly and own it – you will have to deal with this beast so there is no real option!
Once you have finished learning Git, have a look at the branching strategies that are used by GitLab: https://www.udemy.com/course/aws-lambda-a-practical-guide
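To get the basic feature-branch cycle into your fingers, here is a small sketch that drives the real git CLI from Python in a scratch directory – the file name, branch name and identity are all just examples for a throwaway repository.

```python
import subprocess

def git(*args):
    """Run a git command and stop if it fails."""
    subprocess.run(["git", *args], check=True)

# Run this in an empty scratch directory -- it creates a throwaway repository.
git("init")
git("config", "user.email", "lab@example.com")   # example identity for the demo repo
git("config", "user.name", "Lab User")

with open("app.py", "w") as f:
    f.write('print("hello")\n')
git("add", "app.py")
git("commit", "-m", "initial commit")

git("switch", "-c", "feature/greeting")          # start a feature branch
with open("app.py", "a") as f:
    f.write('print("hello again")\n')
git("commit", "-am", "add a second greeting")

git("switch", "-")                               # back to the original branch
git("merge", "--no-ff", "feature/greeting")      # merge the feature in
git("--no-pager", "log", "--oneline", "--graph")
```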
Automation
If you have to do something twice in computing – write a script for it. Or better still, use a proper automation package.
Ansible is a configuration management tool that lets you manage and configure all your servers from one place – be that a dozen or upwards of (my personal record) about 12,000. It solved the problem of managing Windows systems by sort of pretending that they don’t exist – until recently, that is… and even now it is really still a Linux-first tool. Considering that it is owned and developed by Red Hat, that is hardly surprising.
Paired with Ansible is HashiCorp’s Terraform. Terraform will build and maintain your environment in AWS for you from code – faster and more repeatably than anything you can manage by hand. This is the core of how DevOps works: an automated means of deployment and maintenance using reusable code modules that can be repurposed automatically.
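To give a flavour of “the pipeline runs Terraform”, here is a sketch of the sort of wrapper a CI job might call – it assumes the terraform binary is on the PATH and that your .tf files live in a directory called infra, which is just an example name.

```python
import subprocess

def terraform(*args, workdir="infra"):
    """Run a terraform subcommand against the given working directory, failing loudly."""
    subprocess.run(["terraform", f"-chdir={workdir}", *args], check=True)

terraform("fmt", "-check")           # fail the job if the code is badly formatted
terraform("init", "-input=false")    # fetch providers and modules, never prompt in CI
terraform("validate")                # catch syntax and reference errors early
terraform("plan", "-input=false")    # show what would change; apply is a separate, gated step
```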
Learning these is as simple as getting hands on with your free AWS account and running code – you have signed up for AWS yes…? If no – go do it now….
For Ansible, I have found nothing better than the published documentation – this is how it should be… https://docs.ansible.com/
Alternative packages are Chef and Puppet, which were once widely used but have recently lost ground to the domination of Ansible and Terraform. Whilst I wouldn’t recommend learning them in depth unless you need to, it would be wise to at least have an understanding of them.
CI/CD
This is the process of automatically pushing code out to the production environment. A continuously integrated system means that when every change is checked in to Git, the code is merged into the production code base. Along the way, the DevOps engineer can mandate many useful tools that run automatically and check that the code is up to quality: that it is formatted correctly, that it is properly linted, that it has no secrets published in the open, that it passes unit tests and functional tests, and so on.
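As one example of the kind of automatic check I mean, here is a minimal sketch that scans a working copy for strings shaped like AWS access keys before anything is published – the pattern and file walk are deliberately simple and no substitute for a proper scanning tool.

```python
import re
from pathlib import Path

AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")   # the well-known AWS access key ID shape

def scan(repo_root="."):
    """Return (path, match) pairs for anything that looks like an AWS access key."""
    hits = []
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits.extend((path, m.group()) for m in AWS_KEY.finditer(text))
    return hits

if __name__ == "__main__":
    findings = scan()
    for path, key in findings:
        print(f"possible AWS access key in {path}: {key[:8]}…")
    raise SystemExit(1 if findings else 0)   # a non-zero exit fails the pipeline stage
```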
The tooling that does this is a CI/CD pipeline, and here there are several to choose from. The most common is perhaps GitLab, which as you can imagine is very tightly integrated with Git, and this would be my first choice to learn. Like so many of these tooling choices it is open source and can be run at home, which is the subject of another article.
Jenkins is an older package, and one that is still commonly used. Whilst I would suggest everyone learn GitLab, there is a lot to be said for having a good familiarity with Jenkins as well. A less common system still worth knowing about is CircleCI.
The CI/CD pipeline can call any of a vast range of tools and frameworks, far too many to list here; however, dealing with secrets for logging into systems, and with binary objects, images, datasets and so on, is something that everyone will need to know. This leads into secrets management, where the emerging leader is HashiCorp’s Vault, along with Nexus and Artifactory for storing objects which cannot easily be kept in Git.
Finally, and not least, is Docker and the concept of containerisation. A container is small, fast and lightweight, holding just enough operating system for the application it runs to operate. Containers are a form of lightweight virtualisation, and there is no better place to learn about them than the books by Nigel Poulton. The first runs you through everything you will ever need to know about Docker, and the second covers the orchestration platform Kubernetes, which manages all the containers that you create.
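If you want to see the idea in a few lines of Python, the sketch below uses the Docker SDK for Python (pip install docker) and assumes a local Docker daemon is running – it pulls a tiny Alpine image, runs a single command inside a container, and throws the container away.

```python
import docker

# Connect to the local Docker daemon using the environment's settings.
client = docker.from_env()

# Run one command in a tiny Alpine container and remove the container afterwards.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```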
Don’t be disheartened by how much there is… becoming good at anything is always a marathon, not a sprint, and for something as large as DevOps it will take a long time. But by nibbling away at it – after all, even the largest meal is eaten and enjoyed one bite at a time – you will get stuck into what is one of the most fascinating pastimes and careers that I know of.
I hadn’t realised that the web site was that broken, and that I’d been sending people links to comments that were not actually there.
I’ve done some major hacking about on the system, dragged some stuff out from backups and rebuilt the image libraries (which is why they all have March dates now). Let’s hope the faffing on the back end stays together. It should now work… fingers crossed.
I suppose I should write something about how to back stuff up properly as well… sigh….