Jenkins in a Container

One of the things I needed to do recently was get Jenkins running as a demo for a customer. We have some pre-built VMs to do this, but I wanted to experiment with a container.

This post covers what I did to get this running.

Getting Jenkins Working

The first stop was to download a container image. A search on Docker Hub first brought me to jenkinsci/blueocean. Blue Ocean is a nicer interface for Jenkins, at least for some things. What I really need is the server itself, but I decided to try the Blue Ocean image anyway. First things first, get the image:

docker pull jenkinsci/blueocean

Next, we need to run this. I know how to run a container. Here, I glanced at the docs, which note this runs on 8080 by default.

docker run --name blueo -p 8080:8080 jenkinsci/blueocean

You get output at the console as things start up; I left off the -d so I could watch this happen. This is a Linux image, so you do need WSL running on Windows. Once things were running, I popped over to the address in a browser and saw this:

[Screenshot: Starting Jenkins]

I’m impatient. If you wait a minute or so, you will actually see this when the browser reloads.

[Screenshot: Sign in (Jenkins)]

This password is in the console output, in this area:

[Screenshot: console output in cmd showing the initial admin password]

If you don’t see it, or you ran the container as detached, this command will get the output:

docker logs blueo
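You can also pull the password straight out of the container. This is a minimal sketch, assuming the default Jenkins home location used by the official images:

docker exec blueo cat /var/jenkins_home/secrets/initialAdminPassword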

Paste in the password, and you’ll get to install plugins. I just let the normal plugins install, since I wasn’t sure what I might need. I know I’ll be adding some Redgate ones, but for now, get the typical ones.

[Screenshot: SetupWizard (Jenkins) plugin installation]

Next, create a new user. If you don't, you'll need admin and that generated password to log in again. Create a user; you'll appreciate having one. Click Save, not "Continue as admin". I made that mistake.

[Screenshot: SetupWizard (Jenkins) first admin user]

Last is the port. We know the default, so I just left this alone.

[Screenshot: SetupWizard (Jenkins) instance URL and port]

Save, continue, and you ought to see this. Note that Blue Ocean isn't the default view; it's a newer way of building pipelines.

[Screenshot: Dashboard (Jenkins)]

I know I need the Redgate plugin, but I’ll drop that in a different post on a database build.

Now, I need an agent. Why? I don't want to customize the container with SQL Server and other stuff, plus it's Linux. I could configure things in another container, but that's not simple. The simplest option is to use the SQL Server tooling already on the host, so I'll drop an agent there.

The Agent

I had to have Java on the host, which is what I wanted to avoid, but I couldn't get around it. The container can't see the host, and I didn't want to do some crazy networking between containers. That ought to be my next project, but for now, I wanted to build with local resources.

One note here. You need a second port on the container for the agent connection. If you didn't start the container as below, you'll need to stop it, remove it, and then create a new one with two ports.

docker run --name blueo -p 8080:8080 -p 50000:50000 -d  jenkinsci/blueocean
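If you already created the container with only one port, a sketch of the replacement steps looks like this. Note that removing the container throws away the setup you've done so far unless you mounted a volume for /var/jenkins_home, so it's worth getting the ports right the first time:

docker stop blueo
docker rm blueo
docker run --name blueo -p 8080:8080 -p 50000:50000 -d jenkinsci/blueocean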

I won't go through installing Java, but once it's there, you can set up a node. The first step for me was to manage nodes: click Manage Jenkins and scroll down.

[Screenshot: Manage Jenkins]

Click this, and then "Add Node" in the left-hand menu.

[Screenshot: Nodes (Jenkins)]

Give this a name. I chose the name of my host machine, and I want this to be a permanent agent. Meaning, I'll save this container as an image so I can start it whenever I want Jenkins running.

[Screenshot: new node name and type (Jenkins)]

It looks like a lot of configuration now, but there are really only a few things to do. Don't let all the red bother you. First, leave the number of executors at 1. This is for my personal setup, not a team.

[Screenshot: node configuration (Jenkins)]

Next, I need a place for Jenkins to work. I created a c:\Jenkins folder, then added a subfolder for the agent itself. I’ll use the subfolder as the agent location. This does look weird, but the server is in the container, so the host is really remote to the server.
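On the host, this is nothing more than a couple of folders. The "agent" subfolder name here is just my assumption; use whatever name you like:

mkdir c:\Jenkins
mkdir c:\Jenkins\agent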

[Screenshot: node remote root directory (Jenkins)]

For the agent connection, I don’t want this running all the time. I also don’t want this as another service. You can set up Jenkins agents as Windows services, but here I’ll just let the agent connect when I start it. I do the same thing with Azure DevOps. I’ll save this config.

[Screenshot: node launch method (Jenkins)]

I have two nodes, one of which isn’t running.

[Screenshot: Nodes (Jenkins)]

If I click the agent, I get what I need: a command line to run and an agent.jar file to download. First, download agent.jar and drop it in your c:\Jenkins folder. Then, copy the command line into a .cmd file as-is and put that in the same folder.

[Screenshot: dkrSpectre agent page (Jenkins)]

I named my file runagent.cmd. When I run this, my agent will start.
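For reference, here's a sketch of what the download and runagent.cmd contents look like, run from the c:\Jenkins folder so agent.jar resolves. The host name, node name, and secret are placeholders; the agent page gives you the real command to copy:

curl -o agent.jar http://<jenkins-host>:8080/jnlpJars/agent.jar
java -jar agent.jar -jnlpUrl http://<jenkins-host>:8080/computer/<node-name>/slave-agent.jnlp -secret <secret> -workDir "c:\Jenkins\agent"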

[Screenshot: cmd window running runagent.cmd]

And the agent is connected, with its data shown:

[Screenshot: Nodes (Jenkins) with the agent online]

That’s it. Now I can build. That’s the subject of another post.


Demo Data for Everyone

As someone learning about DevOps, I follow a number of people, one of whom is Gene Kim. When I see him get excited about a post, I usually read it. That's how I found this post on Demo Data as Code. It's a short but interesting read. I think this is actually something more people ought to implement in their environments, and not just for demos.

DevOps is about reliability and repeatability, among other things, and those two are tackled by automating a known process. We don't want simple, silly mistakes, or even complex errors, undermining our ability to move forward and create value, and we don't want those errors eating up the time of expensive talent with unnecessary work. Part of ensuring both repeatability and reliability involves using the data in our databases to evaluate our application. That isn't necessarily about demos, though the same data could certainly be used for them.

One of the areas that is often left out of the process is the data that we use in building our systems. We need some data for developers, for QA, and often for demos. In all of those cases, when humans need to repeatedly look at how well the software performs and want to re-test things, they need some consistent data. I'd also argue that the need for agility means that we need a manageable data set. I think SQL Provision from Redgate is amazing, but I still don't want to always develop with 2TB of masked data. I certainly don't want to demo with that for customers from a laptop, and might not want to share it in the cloud.

At Redgate, we sell masking with SQL Provision, and it supports most of the process that’s outlined in the Demo Data as Code article. What it needs, however, is a small set of data that can be masked in a deterministic fashion. What I recommend to most clients is that they build a known set of test data, which could be used for demos. This can include all your edge cases and show off new features. It’s helpful for developers, testers, and salespeople, who will always have a known, useful set of data.

This can't be a build-it-and-forget-it effort, a point the article emphasizes as well. This dataset will need to be altered over time. There ought to be a process to build it, likely from production data that gets sanitized. It can then be distributed through SQL Provision (or similar technology), with backups, or even as a set of scripts in your VCS. Ensure an environment can be hydrated instantly on any platform, from a developer workstation to a sales laptop to a QA server. Once you have this, everyone can work on evaluating your software from a known baseline.

And if you find the need for more data, then just add it. You have a process, so add an additional step that will cover the holes you inevitably find.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.


Organized Learning

I really wish I could do something like the DBA Training Plan from Brent Ozar. Honestly, I don't have the time to do it the way he is, and I certainly can't respond to any volume of emails, but I applaud the way he's doing this. Brent is sharing a lot of ideas and information on how to start getting a handle on your environment. As of this writing, there are eight posts in the series, and I expect more in the future.

Becoming a DBA is often a random collection of experiences that almost everyone goes through in a different order. Maybe you started by having to recover a database with a restore. Maybe it was the need to install the server software and configure users. Maybe you find yourself more interested in writing database queries in T-SQL than methods in C# and switch over. However you learn, there are two things I know: your environment will drive your learning and there’s always more to learn.

My start as a DBA came about when I needed to install a SQL Server instance to handle a new application. It was important that I understood the general admin duties to ensure backups and manage security. At first I didn’t think much of the platform, but as I tried to troubleshoot performance issues, I started to learn about how connections are made, resources are used, and what bad T-SQL looks like. From there, I moved on to more development before coming back to the admin side later.

It would be great to have a class that teaches you to be a DBA, but really, the job has varied somewhat at each position I've held. There are some specific things that are important at every job, but the exceptions often end up being the rule when it comes to keeping your environment working well. Once a system is in production, it becomes very hard to change anything, from security to code, without lots of testing and approvals. I find that working within those constraints drives a lot of learning, though it's not often deep or varied enough to investigate all the options.

I don't know everything about SQL Server, but I have developed two very important skills in my career. First, I've learned how to learn, by reading, researching, and practicing new skills. I feel comfortable that I can come up to advanced beginner on a topic very quickly, usually competent enough to make something work. Second, I've learned how to ask for help. I'm lucky in that I have lots of friends I can call on for questions in specific areas. If you don't know someone who's an expert, I hope you know about #sqlhelp on Twitter and the forums at SQLServerCentral. These are great places to get help on whatever is troubling you about SQL Server.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.


Upgrades are Hard

I think the people who run Stack Overflow are pretty sharp. They've built a well-performing site, they've worked on some useful open source projects, and they think through their projects. They still have issues with upgrades.

Taryn Pratt, from Stack Exchange, has written a few nice posts about her experiences upgrading the Stack Overflow databases. The first one was last year, when their multi-server Availability Group environment was upgraded from SQL Server 2012 to SQL Server 2017. Just recently she published another post on their Windows Server upgrade, from 2012 to 2016.

If you've never done a complex upgrade, these posts are worth reading. Don't second-guess Taryn; just read as if you are following along. You have the benefit of hindsight, but seeing what came up in the middle of the work will teach you about things that might cause you issues. In the Windows upgrade, one of the interesting problems was a VM vs. physical machine difference with drivers. To me, this might be one reason to never bother with anything other than a VM, even if it's the only one on the machine. A lightweight hypervisor like Hyper-V or Xen doesn't eat much in the way of resources, but it can provide some separation from these issues.

One other thing to note is that you really need your runbook. I constantly see people asking for a checklist for how to upgrade, and there are good general steps to follow, but your environment likely needs a custom runbook that covers your situation. The Stack environment is complex, but even so, I was surprised by the 35 pages of steps and notes.

As with most plans, this one had issues when it was finally implemented. I think Mike Tyson sums it up nicely ("everyone has a plan until they get punched in the mouth"), which is why you practice your moves. It's also why we can't necessarily upgrade every year. There's a reason many companies still have old versions in production (thanks, Brent Ozar).

Plan, practice, test, repeat, and then be prepared to think on your feet. That is, if you upgrade your systems at all. Upgrading is time-consuming and expensive, and I can see why a lot of companies have looked at cloud services, like Managed Instances. Reducing the time and cost of changing your OS and/or SQL version is something we should all be thinking about.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.
