Lots of data in RAM

I remember buying my first computer after college. At the time I was working part time and attending graduate school part time. My aging high-school-era Apple II wasn't capable of handling the load, not to mention its 300 baud modem wasn't a lot of fun. I could likely have gotten a faster modem with a serial connection, but most of the Internet providers of the day wanted Windows or a Macintosh OS.

I decided to just spend the money and get a computer. Being a bit of a geek, I wanted to customize things and get the most bang for my buck. I paged through Computer Shopper, looking for deals on parts and trying to find the cheapest 486 CPU I could. I made various choices, one of which was the RAM. I remember thinking hard about whether I could get by with two 1MB modules or whether I needed to max out the motherboard with four. These days I think in GB, not MB, and 2 or 4 isn't a good number.

One of the first servers I built at one job used 8MB chips, getting 32MB into a machine, which felt like a lot. Certainly more than I'd ever had in a computer before. I also remember helping write a PO for a Netware server that cost US$250,000, one with 256MB of RAM. It was the size of a washing machine and filled with 32MB HDDs, making it by far the largest machine I'd seen to date.

That's a nostalgic look back to 1992, when I was working on my CNE certification. I always think about that large, very expensive machine when I see some leap in hardware. This past week, it was a DDR4 Gen-Z memory module that packs 256GB of RAM into a single stick. Not only is that a larger number than on the RAM sticks I started working with, but the units have jumped three orders of magnitude, from MB to GB. Imagine four of these sticks giving you a terabyte of RAM in a desktop. Who among us wouldn't want that for desktop SQL Server development?

This level of tech isn't ready for most of us, but it's not that far off. 32GB laptops aren't common, but they are easy to find. You can even get 64GB of RAM now if you want it. How long before we see 128GB+ desktop and laptop machines? I have no idea, as we seem to be in a bit of a hardware lull. Most developers run 8GB or 16GB, and that seems to have been the standard for quite a few years.

I don't know if we'll start to see vendors pushing to add more RAM. Certainly the cloud has changed things, with lots of processing and storage occurring off the local system. Three years after buying my last laptop, I'd have expected to get 32GB in the same form factor for the price I paid back then, but that's not the case. The machine supports 32GB, but that much RAM is still pricey, the same price it was three years ago.

I wonder how many of you are still power-hungry, wanting more capable machines. Or has the level of performance you get from your i7 on Windows or macOS been good enough these last few years? This machine works great for me, whether I'm running VMs or containers. 16GB does the job well, and I'd look for the same level of hardware when I replace this laptop.

Steve Jones


Changing Git Credentials in Windows

tl;dr: how to switch cached git credentials on Windows when you need to use a different account.

I had to work with a new GitHub account recently and needed to set up a separate set of credentials. I worked through that process and managed to get things working with a Personal Access Token (PAT). Since I have 2FA (two-factor authentication) enabled on my GitHub account, my username and password never work at the command line.

Once I was done, I wanted to change back. I had cloned a new repo, made a couple changes, and then when I went to push, I got:

[Screenshot: cmd output from the failed git push]

sjonesdkranch is the wrong account. I want to use my way0utwest account, but despite entering a new PAT, it wouldn't save. Since I use SQL Source Control, this was a pain, as there's no terminal access in that tool. I googled around and eventually realized that my generic git credentials are stored in the Windows Credential Manager, not in the Git config files.
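You can confirm a credential helper is in play before digging through any UI. A quick check, assuming Git Credential Manager for Windows is installed (it typically registers itself as the manager helper):

git config --global credential.helper

If that returns manager, git hands credential storage off to Windows rather than keeping anything in your .gitconfig.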

This is in the Control Panel -> User Accounts -> Credential Manager (shown here).

[Screenshot: Credential Manager]

I first went to Web Credentials, since I saw two https://www.github.com entries there. That was the wrong place: I don't want the web entries, I want the git:github.com entry, which is under the Windows Credentials section.

You can see I have a lot of Git stuff, but I want the GitHub.com one.

[Screenshot: Credential Manager, Windows Credentials list]

I expanded this entry and then clicked Edit.

[Screenshot: Credential Manager, expanded git:github.com entry]

This entry had been created from the PAT I'd entered a while ago. I had saved the PAT I generated today in my password manager, so I entered it in this dialog.

[Screenshot: Edit Generic Credential dialog]

I saved it and my git push worked, both from the CLI and SQL Source Control.

[Screenshot: cmd output from the successful git push]

Good reminder. Windows, especially Windows 10, has started to save and use credentials in new ways, and lots of software is integrating with it.
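If you'd rather stay in a terminal, the same store can be inspected and cleared with cmdkey. A quick sketch; use the exact target name your Credential Manager shows (git:https://github.com is typical for Git Credential Manager, but yours may differ):

cmdkey /list | findstr /i git

cmdkey /delete:git:https://github.com

Delete the entry and the next git push prompts for credentials again, which is another way to swap accounts.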


Jenkins in a Container

One of the things I needed to do recently was get Jenkins running as a demo for a customer. We have some pre-built VMs to do this, but I wanted to experiment with a container.

This post covers what I did to get this running.

Getting Jenkins Working

The first step was to download an image. A search on Docker Hub brought me to jenkinsci/blueocean. Blue Ocean is a nicer interface for Jenkins, at least for some things. Really, I just need the server, but I decided to try the Blue Ocean interface while I was at it. First things first, get the image:

docker pull jenkinsci/blueocean

Next, we need to run this. I know how to run a container; here I glanced at the docs, which note that Jenkins listens on port 8080 by default.

docker run --name blueo -p 8080:8080 jenkinsci/blueocean

Once things are running, you get output at the console. I left off the -d so I could watch this start. This is a Linux image, so on Windows you need Docker set up to run Linux containers. Once things were running, I popped over to the mapped address, http://localhost:8080, in a browser and saw this:

[Screenshot: Starting Jenkins page]

I’m impatient. If you wait a minute or so, you will actually see this when the browser reloads.

[Screenshot: Jenkins sign-in page]

This password is in the console output, in this area:

[Screenshot: console output containing the initial admin password]

If you don’t see it, or you ran the container as detached, this command will get the output:

docker logs blueo
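The password is also written to a file inside the container, at the standard location the Jenkins images use, so reading it with docker exec works as well:

docker exec blueo cat /var/jenkins_home/secrets/initialAdminPassword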

Paste in the password, and you'll get to the plugin installation. I just let the suggested plugins install, since I wasn't sure what I might need. I know I'll be adding some Redgate ones, but for now, the typical set is fine.

[Screenshot: Jenkins SetupWizard, installing plugins]

Next, create a new user. If you don't create one, you'll need the admin account and that generated password to log in again. Create a user; you'll appreciate having one. Click Save, not Continue as admin. I made that mistake.

[Screenshot: Jenkins SetupWizard, creating the first user]

Last is the port. We know the default, so I just left this as it was.

[Screenshot: Jenkins SetupWizard, instance configuration]

Save, continue, and you ought to see this. Note that Blue Ocean isn't the default view; it's a new way of building pipelines.

[Screenshot: Jenkins dashboard]

I know I need the Redgate plugin, but I'll cover that in a different post on a database build.

Now, I need an agent. Why? I don't want to customize the container with SQL Server and other tools, plus it's Linux. I could configure things in another container, but that's not simple. The simplest approach is to use the SQL Server tooling on the host, so I'll drop an agent there.

The Agent

I had to have Java on the host, which was what I wanted to avoid, but there was no way around it. The container can't see the host's tools, and I didn't want to set up some crazy networking between containers. That ought to be my next project, but for now, I wanted to build with local resources.

One note here. You need a second port for the container. If you didn't start the container as below, you'll need to stop it, remove it, and then create a new one with two ports.
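Stopping and removing the old container is quick; blueo is the name given in the run command above:

docker stop blueo
docker rm blueo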

docker run --name blueo -p 8080:8080 -p 50000:50000 -d jenkinsci/blueocean

I won't go through installing Java, but once it's there, you can set up a node. The first step for me was to manage nodes. Click Manage Jenkins and scroll down.

[Screenshot: Manage Jenkins page]

Click this, and then "Add Node" on the left menu.

[Screenshot: Nodes page]

Give this a name. I chose the name of my host machine, and I want this to be permanent. By that I mean I'll save this container as an image, so I can start it whenever I want Jenkins running.

[Screenshot: naming the new node]
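As an aside on keeping Jenkins around: day to day, simply restarting the same container preserves all of this configuration, since the Jenkins home travels with the container:

docker start blueo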

It looks like there's a lot of configuration now, but there are really only a few things to do. Don't let all the red bother you. First, leave executors at 1. This is for my personal setup, not a team.

[Screenshot: node configuration, executors]

Next, I need a place for Jenkins to work. I created a c:\Jenkins folder, then added a subfolder for the agent itself, which I'll use as the agent location. This does look odd, but the server is in the container, so the host really is remote to the server.

[Screenshot: node configuration, remote root directory]
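Creating those folders from a command prompt is trivial; the agent subfolder name here is my own choice:

mkdir c:\Jenkins
mkdir c:\Jenkins\agent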

For the agent connection, I don't want this running all the time, and I don't want it as another service. You can set up Jenkins agents as Windows services, but here I'll just start the agent when I need it to connect. I do the same thing with Azure DevOps. I'll save this config.

[Screenshot: node configuration, launch method]

I have two nodes, one of which isn’t running.

[Screenshot: Nodes list, with the new agent offline]

If I click the agent, I get what I need: a command line to run and an agent.jar file to download. First, download agent.jar and drop it in your c:\Jenkins folder. Then copy the command line into a .cmd file as is, and put that in the same folder.

[Screenshot: the dkrSpectre agent page with the connection command line]
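The jar itself comes from your Jenkins server; /jnlpJars/agent.jar is the standard path it's served from (curl ships with recent Windows 10, but any download method works):

curl -o c:\Jenkins\agent.jar http://localhost:8080/jnlpJars/agent.jar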

I named my file runagent.cmd. When I run this, my agent will start.
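For reference, the contents look roughly like this. The node name matches mine, and the secret is a placeholder; use the exact line your agent page shows:

java -jar agent.jar -jnlpUrl http://localhost:8080/computer/dkrSpectre/slave-agent.jnlp -secret <your-secret> -workDir "c:\Jenkins\agent"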

[Screenshot: cmd window running runagent.cmd]

And the agent is connected, with its data shown:

[Screenshot: Nodes list, with the agent connected]

That’s it. Now I can build. That’s the subject of another post.


Demo Data for Everyone

As someone learning about DevOps, I follow a number of people, one of whom is Gene Kim. When I see him get excited about a post, I usually read it. That's how I found this post on Demo Data as Code. It's a short but interesting read, and I think it's something more people ought to implement in their environments, not just for demos.

DevOps is about reliability and repeatability, among other things, and those two are tackled with automation of a known process. We don't want simple, silly mistakes, or even complex errors, undermining our ability to move forward and create value, and we don't want avoidable errors eating up resources and time from expensive talent. Part of ensuring both repeatability and reliability involves the data we use in our databases to evaluate our application. This isn't necessarily demo data, though it could serve that purpose.

One of the areas often left out of the process is the data we use in building our systems. We need some data for developers, for QA, and often for demos. In all of those cases, where humans need to repeatedly look at how well the software performs and want to re-test things, they need consistent data. I'd also argue that the need for agility means we need a manageable data set. I think SQL Provision from Redgate is amazing, but I still don't want to always develop against 2TB of masked data. I certainly don't want to demo with that for customers from a laptop, and I might not want to share it in the cloud.

At Redgate, we sell masking with SQL Provision, and it supports most of the process outlined in the Demo Data as Code article. What it needs, however, is a small set of data that can be masked in a deterministic fashion. What I recommend to most clients is that they build a known set of test data, which can double as demo data. It can include all your edge cases and show off new features, and it's helpful for developers, testers, and salespeople, who will always have a known, useful set of data.

This can't be a build-it-and-forget-it effort, a point the article also emphasizes. The data set will need to be altered over time, and there ought to be a process to build it, likely from production data that gets sanitized. It can then be distributed through SQL Provision (or similar technology), with backups, or even as a set of scripts in your VCS. Ensure an environment can be hydrated instantly on any platform, from a developer workstation to a sales laptop to a QA server. Once you have this, everyone can evaluate your software from a known baseline.

And if you find the need for more data, then just add it. You have a process, so add an additional step that will cover the holes you inevitably find.

Steve Jones

Listen to the podcast at Libsyn, Stitcher or iTunes.
