Sysadmin by day, developer by night

Writing this up so I can remember it. And if it helps anyone else, great.

We use a Juniper VPN at work and it’s always been a bit of a pain to use with my 64-bit Ubuntu workstations. By 13.04 I had it pretty much down, so of course Canonical made a change that broke it: they removed the ia32-libs package. Here’s how you do it now.

Note: my method doesn’t also give you a working 64-bit setup. That may or may not work; I’m not sure, since I don’t use Java for anything else.

Start by installing the 32-bit libraries we need:


sudo apt-get install libstdc++6:i386 lib32z1 lib32ncurses5 lib32bz2-1.0 libxext6:i386 libxrender1:i386 libxtst6:i386 libxi6:i386

Next, go to Oracle and download the latest 32-bit version of Oracle Java 7: http://www.oracle.com/technetwork/java/javase/downloads/index.html

You want the tar.gz version.

Now, install it


sudo bash                      # or su or however, just be root
mkdir -p /usr/java
cd /usr/java                   # the mv/tar/ln below assume you're working in /usr/java
mv ~/Downloads/[JAVA FILE] .
tar zxvf [JAVA FILE]
ln -s [JAVA DIR CREATED] jre   # symlink so /usr/java/jre always points at the current JRE
update-alternatives --install /usr/bin/java java /usr/java/jre/bin/java 1
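
A quick sanity check at this point, plus linking the plugin where Firefox can see it for the Juniper applet. The libnpjp2.so path below is an assumption based on where the 32-bit JRE 7 tarball unpacks things on my setup; adjust it if your layout differs.

java -version                          # should report the 32-bit Java 7 you just unpacked
update-alternatives --config java      # select /usr/java/jre/bin/java if it isn't already the default
mkdir -p ~/.mozilla/plugins            # link the plugin so the browser can load the VPN applet
ln -s /usr/java/jre/lib/i386/libnpjp2.so ~/.mozilla/plugins/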

I’m mainly posting this in the hopes that Google will index it and make it simpler for people to find. It took me way too long to find this.

When working with systems like, say, Nagios, which can have their configuration broken out into multiple files and directories, you may want a directory inside your cookbook holding all of those files, and then have Chef push that directory to the Nagios server when you deploy it.

It’s actually really easy; figuring out how to do it is what takes time when you start googling, mainly because the resource you use has what I believe is a confusing name: remote_directory.

You might be familiar with cookbook_file: put something in the files/ subdirectory of your cookbook, then use cookbook_file to deploy it. Well, remote_directory, IMHO, should have been called cookbook_directory, because that’s what it is. It has a couple of extra settings, mainly for managing file vs. directory permissions, and it follows the same directory naming rules as cookbook_file. For example, say you have nagios1.biz.com and nagios2.biz.com.

You can have a structure like this:

files/
  host-nagios1.biz.com/
    etc/
  host-nagios2.biz.com/
    etc/
  default/
    objects/

objects/ would hold your contacts.cfg, timeperiods.cfg, and so on: stuff that’s global, whereas the rest of the config is often pretty server-specific. You can deploy it this way:

remote_directory "/usr/local/nagios/etc" do
  source "etc"
  files_owner "nagios"
  files_group "nagios"
  files_mode 00640
  owner "nagios"
  group "nagios"
  mode 00750
end

remote_directory "/usr/local/nagios/etc/objects" do
  source "objects"
  files_owner "nagios"
  files_group "nagios"
  files_mode 00640
  owner "nagios"
  group "nagios"
  mode 00750
end

Super simple.

I’m not looking for a job. I’m pretty set right now, and I have so much on my plate for the next couple of years that looking for a new job would probably make me go crazy.

However, I get lots of unsolicited emails from recruiters. My LinkedIn profile seems to be part of it, and they also find my resume from like 8 years ago; I don’t know. I’m not getting more than average, either. I’ve asked around, and lots of people are getting hit by recruiters.

I’ve seen the blog posts: some complain about it, others do funny things to the recruiters along the lines of what people might do to a telemarketer calling at dinner time, and lots just ignore them.

I respond every time. I thank them. Why? Well, I never know when I might need them. I’m a lucky guy; I’ve worked for two companies my entire career, over 8 years each. However, the truth is a job is like the Dread Pirate Roberts: “Good work, sleep well, I’ll most likely fire you in the morning.”

Should that happen, well, I’ve got a whole slew of recruiters who appreciated my resume enough in the past to reach out to me. They may even recall that I responded with appreciation and behaved professionally. As a sysadmin, I’m supposed to have good support skills, and responding and behaving in a professional manner is a requirement for good support skills.

I’m busy. Just like a lot of other people. I work hard, just like a lot of other people. Recruiters are busy and work hard too. Cold calling and contacting people can’t be the greatest job in the world. Remember they’re people too, and hey, one day they might even help you out.

Sometimes I run across a little gem like this one.

Why SysAdmin’s Can’t Code

It covers great little tips about how System Administrators can write better code. Gee, thanks.

Now let me tell you why we don’t code. We can do it; we choose not to.

First of all, if we wanted to be programmers, we would be programmers. The amount of effort we put into doing ops well could just as easily be applied to learning to program. But we want to do ops, not sit and write code. Ops provides the experiences we prefer, whether it’s infrastructure architecture or saving the day when things go kaput.

You’re right, scripting isn’t programming. We know that. We script to get stuff done. Our scripts are there to make repeatable processes more efficient and less prone to operator error. We devops guys cross the line somewhat, using tools like Chef, written by programmers, to provide a framework for our scripts. I don’t know about a lot of other people using Chef, but I know most of my Chef recipes are basically just Bash.

We don’t want to learn the latest IDE and develop best practices for working with a version control system alongside other developers. Code reviews, strategy meetings, QA reviews… what? No thanks, let the programmers get that stuff done. They have seven hours to commit to this project today; I have 45 minutes, and that’s only if all my KLO (keep-the-lights-on) work goes well.

The programmer is working on some fancy algorithm to do something amazing in the program they’re working on. I’m trying to parse 20 days of Apache logs to answer a question my manager needs answered in the next 20 minutes. cat access.log | cut | grep -v | grep -v | grep….
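
A hypothetical version of that kind of throwaway pipeline (the log path, URL, and field position are made up for illustration, and it assumes the common log format where the client IP is the first field):

grep "POST /checkout" /var/log/httpd/access.log* | awk '{print $1}' | sort | uniq -c | sort -rn | head -20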

It’s never been that we can’t code; some of us are actually pretty good at it. It’s that we don’t have time to code, and it’s not what we get paid for.

By now, if you’re reading this, you’ve probably heard about the recent NSA scandal. A systems administrator, a contractor at the NSA, recently started releasing classified information about the NSA’s information-gathering practices that could be considered unconstitutional. This post is not about discussing the specifics of that; I’m sure you can find plenty of other people talking about it.

What I want to discuss are some things that I think most systems administrators have known for a long time, and that have now become obvious to more than just us. Before an event like this, we could talk about the risks of data in the cloud and at hosted providers all we wanted; most people would assume the risk was minor.

Now, though, might be a good time to assess the value of your data and whether it is secure enough. My understanding is Snowden was a well-paid ($200k a year) contractor living in a paradise known as Hawaii. He was willing to throw that away.

Now, maybe you hire your own systems administrators. You have what you believe is a rigorous hiring process, and you don’t hire someone you don’t feel comfortable giving root access to. Makes sense: you’re not going to give someone the keys to all your data without trusting them, right?

Well, if you’re hosting that data in a remote data center or in a cloud, you’ve actually done just that. Sure, OK, they don’t have root access to just log in to your server or image (or do they?). They still have access to the data, though. In the data center, all they have to do is go grab one of your disks. If you’re doing RAID1 mirroring of all your disks, you may never even catch that it happened if they swap a replacement in, unless you have monitoring down to the level of alerting when a single drive drops out. In the cloud, heck, all the data is on their disks; they can copy whatever they want.

Much like at the NSA (according to Snowden), there are policies that say those employees can’t do that. So you’re trusting your hosting provider to hire people who won’t be tempted to sell your data to someone else. Those people probably make a lot less than $200k, and unless you’re hosting in Hawaii, they aren’t living there either.

Systems administrators know the first and best layer of security is the physical security of the disks. That’s why we put locks on the doors, and a lot of us put cameras up. Once you’ve put your data outside that locked door, you’ve suddenly opened your circle of trust very wide. In most cases the only thing keeping people from doing things with that data that you don’t want them to do is their word. Something to think about.

Recently I read a sad tale about a startup by a couple of designers that suffered a complete loss of data due to hard drive failure and no backups. You can read the story here.

http://blog.method.ac/announcements/our-servers-hard-drive-is-dead-we-didnt-have-a-backup/

The author obviously wasn’t happy handling the infrastructure and admitted he didn’t have a lot of experience in this area.

“In practice it was very painful. I had to compile nginx, install SSL certificates, open ports, tweak config files all over the place. I did some math and I found out that using a third party email delivery would be more expensive than the server itself, so I set up the infrastructure for delivering and receiving emails. I was overwhelmed. Just so much stuff to learn.”

They also appear to not have had time to focus on operations.

“The fact that I didn’t have backups was in the back of my mind the entire time. When we soft launched I thought I would have enough time to learn how to properly back up the server, but in practice that time never came.”

But what really strikes me about the whole thing is that after a failure to manage their infrastructure, they appear to be more focused on hiring a programmer than someone with operations experience.

“What comes next

We need to get programming talent on-board. We believe in owning your product and make it happen with your own means”.

Honestly, in 2013, this isn’t too shocking. With Heroku, Amazon, and Rackspace, I think a lot of people with great startup ideas have the impression ops isn’t as necessary anymore. Supporters of this idea will likely point out that in this case the designers specifically chose not to use those services and ran a dedicated server, and to some extent they’re right. But even in those environments you still need to manage your data according to some basic operations principles.

Are you relying on Heroku to back up your data? Are you pushing snapshot backups to S3? Great. Are you testing that a restore actually works? You’ve made some changes to your application’s code; does it still work with the data you had running last week? If a subpoena for one of your employees’ email comes in, do you have procedures in place to decide whether or not to comply? Is there a single account that, once accessed, can delete all your data and backups?
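
A restore drill doesn’t need fancy tooling. As a rough sketch (the bucket, file names, and the s3cmd/Postgres tooling are assumptions for illustration, not anyone’s actual setup):

s3cmd get s3://my-backups/db/app-latest.dump /tmp/restore-test.dump    # pull last night's hypothetical snapshot
createdb restore_test                                                  # restore it somewhere disposable
pg_restore --dbname=restore_test /tmp/restore-test.dump
psql -d restore_test -c "SELECT count(*) FROM users;"                  # spot-check that data actually came back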

My suggestion to anyone starting a business is to at least get a consultant to help manage their infrastructure (virtual or otherwise) and compliance before they end up in scenarios like the above.

This post was originally going to be about how happy I was with Kubuntu 13.04. However, I encountered a problem with kwallet that eventually made me look for alternatives.

I read about GNOME 3.8 and thought to myself that I hadn’t looked at GNOME in a long time. I installed 3.8 via extra repositories on my Kubuntu 13.04 laptop and played with it a bit. I liked it, so I downloaded Ubuntu GNOME. That comes with version 3.6, and I’ve decided to stick with it for a while rather than adding the 3.8 repositories.

Overall, I’m pretty impressed. I knew there were some things I was going to like from the start. Being back on a file manager running on top of GVFS was, of course, going to make life easier when dealing with Samba shares at work.

The default desktop experience is nice. Very nice. So nice, in fact, that I’m still using the default background. It’s pretty neat how it changes throughout the day. When I fire up a new KDE desktop, I usually have about five minutes of customizing widgets and panels ahead of me.

The big surprise is Evolution. The time I normally spend tuning a KDE desktop can be spent installing add-ons like evolution-ews instead, so I don’t need DavMail anymore. Evolution still isn’t the prettiest application, but man does it ever just work*. I added my Google calendar, and now when I click on the time display in the top bar I really do have a nice overview of my schedule.

* I experienced an Evolution crash shortly after typing that. Huh.

Multi-display was simple to set up. I did change one keyboard shortcut: the activities overview is now Super-A, which is what it was on the 3.8 desktop I played with for a bit, and it’s easier to remember. Speaking of which, the activities overview screen is great. In other desktops I’d have a shortcut to launch applications, i.e. Alt-F2 in KDE or Ctrl-` in e17, and then there would be the various expose shortcuts. In GNOME it’s all the same thing: I get a nice expose view and the search box is focused to find applications. Virtual desktop handling is on demand. Sure, I’m going to miss the cube, but really, not that much. I liked this virtual desktop handling in Cinnamon, too.

I only had to install icedtea-7-plugin to get my Juniper VPN working. Note, though, that’s because I installed a 32-bit version of Linux. I gave up on trying to get it working with 64-bit a long time ago.
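
For the record, that was just one package (name as of Ubuntu 13.04):

sudo apt-get install icedtea-7-plugin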

It’s not all unicorns and rainbows (sorry, both my daughters are on a My Little Pony kick).

I can’t get HDMI audio to work out of my DisplayPort. Extensive googling has me hoping there’s a kernel patch coming that may fix this.

Evolution did crash on me when I got a ton of alerts all at once while writing this. Sysadmins can get a whole lot of emails at once when something goes wrong.

It’s sluggish sometimes. I click on the time in the top panel and sometimes there is a noticeable delay before the pull down occurs.

What’s with the two Online Accounts entries in System Settings?

I can’t get Guake to show up on my external monitor; it’s stuck on the laptop panel. I liked how I could configure Yakuake to open on whichever display the mouse cursor was on.

Overall, especially given the kwallet bug, it’s more usable than Kubuntu 13.04 right now. There’s still a bit of polish missing from the experience, though. Hopefully a lot of this will improve as 3.8 rolls out and the 13.04 version of Ubuntu matures. I like the look and feel.

I did notice that pretty much everything is working: brightness and volume keys, the touchpad… and I haven’t even installed the Sputnik kernel for my XPS Ultrabook. I may have to check out some other GNOME distros and see if everything just works with the 3.8 kernel.

This is probably obvious to everyone. In fact, it was obvious just now when I did it. However, it wasn’t obvious the first few times I actually went looking for it.

So, say you deployed your Rackspace cloud machine on Ubuntu 11.10 way back when. You want to upgrade to 12.04 and don’t want to do it in place, but you do want to keep your IP.

In the control panel for the cloud server, use the Rebuild button. I guess the first few times I saw that button I presumed it would rebuild the existing machine as-is. No, it allows you to change the image; of course, it will still wipe out all your existing data.

I don’t know Ruby. I think I got the Ruby Cookbook a while back and flipped through some pages. It looks like a neat language, and I really liked some of the syntax. It seemed like something you could naturally progress to from Perl, which I imagine a lot of sysadmins did. I’m saying this because I want it clear I’m in no way suggesting Ruby is something you shouldn’t learn. I never did simply because I ended up learning Python for both professional and personal reasons.

Ah, I said Python. No, I’m not going to talk about using Python and Chef, sorry. What I am going to talk about is using Bash and Chef.

A quick bit about me: I’ve been working on some version of *nix since the mid-90s. I got my first job in the late 90s as a help desk guy because I demonstrated I could move my way around a Unix (DEC Alpha) prompt, which I’d learned from having dial-up Unix shell accounts in my BBS days for IRC. I wrote my first shell script on my third day on the job, automating some search engine indexing. That was an ugly search engine, but that’s way off topic.

For the past couple of months I’ve been working on getting configuration management going in my current environment. In the 6+ years I’ve been here, we’ve grown. We have a giant virtual infrastructure with a large mix of both Red Hat Linux and Microsoft Windows servers. It was because of this mix that I ended up going with Chef over other options; it seemed to support both Linux and Windows the best.

So now that I’ve been banging away at Chef for a while, I’ve realized I still haven’t learned any Ruby. In the past I’ve always scripted things with Bash until Bash didn’t cut it, then I’d use Perl or Python. I think for most sysadmins with my tenure, this is the way we do things. Chef, for me, has been a way to better organize those scripts. Really, most of my recipes are simply a collection of Bash scripts.

OK, for example: on our database server builds we create a lot of partitions, and those partitions then have very specific permissions. Now, using Chef you could do something like this:


directory "/opt/oracle" do
  owner "oracle"
  group "oracle"
  mode "0750"
end

directory "/dbbackup" do
  owner "oracle"
  group "oracle"
  mode "0750"
end

directory "/dbarchive" do
  owner "oracle"
  group "oracle"
  mode "0750"
end

directory "/dbarchive/mysql" do
  owner "mysql"
  group "mysql"
  mode "0750"
end

directory "/ua1/oradata" do
  owner "oracle"
  group "oracle"
  mode "0750"
end

directory "/ua2/oradata" do
  owner "oracle"
  group "oracle"
  mode "0750"
end

There are actually a whole lot more directories and mount points than that. Since I was porting over existing build docs to Chef, though, I did it the quick way:


bash "setup_directories" do
  code <<-EOH
    mkdir /opt/oracle; chown oracle:oracle /opt/oracle; chmod 750 /opt/oracle;
    mkdir /dbarchive; chown oracle:oracle /dbarchive; chmod 750 /dbarchive;
    mkdir /dbbackup; chown oracle:oracle /dbbackup; chmod 750 /dbbackup;
    mkdir /dbarchive/mysql; chown mysql:mysql /dbarchive/mysql; chmod 750 /dbarchive/mysql;
    mkdir /ua1/oradata; chown oracle:oracle /ua1/oradata; chmod 750 /ua1/oradata;
    mkdir /ua2/oradata; chown oracle:oracle /ua2/oradata; chmod 750 /ua2/oradata;
  EOH
end

Overall, I find embedding some blocks as Bash scripts is easier to read as well.

Now, I’m not suggesting that you shouldn’t learn Ruby, and I’m not suggesting you shouldn’t learn the other Chef resources you can use; I use template, file, service, and package a lot. What I am saying is that not knowing Ruby is not a good reason to avoid using Chef. Even if it’s just a wrapper for your existing shell scripts, it’s a step up in organization, and the person who inherits it down the road will appreciate it.

In my next post I’ll get into how using recipes to break up big bash scripts into small functional chunks can be helpful.

GNU Screen and Tmux are terminal multiplexers. Yeah, reading that when I first started using them didn’t make a whole lot of sense to me either. Basically, imagine being in console-only mode, no X, and still having tabs like your current favorite terminal app. That’s what a terminal multiplexer provides. Even better, you can split the view into separate panes, so they provide even more functionality when you are using a terminal app in X.

Another great benefit of a terminal multiplexer is that you can detach from a session. If you have a process that will run for hours, you can log in, start the multiplexer, kick off the process, then simply detach the session and log off. This is great for systems administrators who have to kick off a long process on a server: start it, log off, and come back later to check on it. For example, kick off that report process that takes two hours to run at 3:30 PM, go home, and check it in the morning by reattaching to the session. No need to leave it running on your workstation at work overnight.
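
With Screen, that workflow looks roughly like this (the report script is a hypothetical stand-in for whatever long job you’re running):

screen -S report            # start a named session on the server
./run_monthly_report.sh     # kick off the long job, then press Ctrl-a d to detach and log off

screen -ls                  # the next morning: list sessions
screen -r report            # reattach and check on it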

When you reattach to a session, the full history of that session is still available to you. You can scroll back up and see the scroll buffer of that session, even if you connect from another computer.

Another cool thing you can do with these is share a session with someone else. Think screen sharing, just for your console session.
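
In its simplest form with Screen, that’s just two attaches to the same named session (this quick sketch assumes both people are logged in as the same user; true multi-user sharing takes extra setup):

screen -S pairdebug         # first person starts a named session
screen -x pairdebug         # second person attaches to it; both see and type into the same terminal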

Screen is the older of the two; it’s been around for a long time and you’ll find it in most package repositories for various flavors of Linux. Yes, ‘yum install screen’ works by default on Red Hat (and I presume CentOS).

When I first started hearing about terminal multiplexers, it was people saying how great Tmux is compared to screen. I’d used screen here and there in the past, mainly for the purpose described above of being able to disconnect from a session while letting the process it’s running continue. It wasn’t until I was playing with e17 as a desktop and wanted to use Terminology as a terminal client that I really dove into using Tmux. Terminology didn’t support tabs, so I started using Tmux for that purpose alone.

Once I started using Tmux more, I found out something else that’s cool: the default key bindings are different from Screen’s. With Screen you use Ctrl-a to run commands; with Tmux it’s Ctrl-b. A lot of Tmux guides tell you how to rebind it to Ctrl-a as well. Don’t do this: if you leave them different, you can run Screen inside a Tmux session without the two prefixes colliding.

Tmux is, in my opinion, the more powerful of the two. The first thing I do when I open a terminal on my desktop is start it. This allows me to split the terminal window when I want to, or create new windows (like new tabs) on the fly, and I like the vim-like keys it has for navigating the scrollback buffer. Tmux is great on the desktop.
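
The default bindings I lean on most, all behind the Ctrl-b prefix and all stock tmux (the vi-style copy-mode keys are the one thing that needs a config line, setw -g mode-keys vi):

Ctrl-b c    # new window (like a new tab)
Ctrl-b n    # next window; Ctrl-b p for previous
Ctrl-b %    # split the current pane side by side
Ctrl-b "    # split the current pane top and bottom
Ctrl-b [    # enter copy mode to scroll the back buffer
Ctrl-b d    # detach from the session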

Screen has its use too, though. Install it on the servers you connect to, and you can take advantage of session durability and sharing there. If you need to create multiple console windows inside that server connection, Screen lets you do that as well.

This is my preferred setup now: Tmux on the desktop, with Screen available on the servers when I need it. Together they make an invaluable tool in your systems administrator toolbox.
