SQL Server Telemetry

One of the things that I’ve seen blogged about and noted in the news over the last year is the amount of data being collected by various software systems. In particular, I’ve seen numerous complaints and concerns over what data Microsoft collects with its platforms. Perhaps this is because Microsoft is the largest vendor, and plenty of other software collects usage information, primarily to determine if features are being used or working correctly. I think much of this is overblown, but I can understand having sensitivity about our computer usage, especially for home operating systems.

Microsoft conducted an extensive review and published data about what is being collected and why. Windows 10 and other products have undergone review and must comply with the published policies. There’s even a Microsoft Privacy site to address concerns and explain things. It’s an easy-to-read policy that explains what Microsoft is collecting if you’re connected to the Internet (if you’re not, don’t worry; there are no files bloating up on your machine). That’s a huge step forward in an area that is evolving, and something I wouldn’t have expected to see in the past. I am glad Microsoft is making strides here, even if I may not agree with specific items in their policies. I do think that most of the companies collecting this data are doing so to improve their products, not spy on customers. I’m sure some do spy, but those are likely smaller organizations with some sort of criminal intent.

As data becomes more important, telemetry for software is potentially a data leakage vector where private, personal, or customer information might be leaked. Certainly as more speech and other customized services are used in businesses, I worry about what data could be accidentally disclosed. After all, it’s not that super powerful smart phone that is actually converting audio to text in many cases; it’s a computer somewhere in the vendor’s cloud.

With databases, this has also been a concern for some people. I’ve seen the Customer Experience Improvement Program for years and have usually opted in. I’m rarely doing something sensitive, and I hope that with more data, Microsoft improves the platform. That’s the stated goal, and I’ve seen them talk about this a few times. The SQL Server team has moved forward and published an explicit policy that spells out what data is collected and when. It was actually just updated recently, and all new versions of the platform must provide this information (if anything is different) and adhere to what they disclose. There is a chance that user data could leak into a crash dump, though users have the opportunity to review the data before it is sent to Microsoft. I’m not sure how many will, but they have the chance.

I would like to be sure that anything sent is secured, and perhaps have an easy way to audit the data sent in a session, but I know this entire process is evolving. One important item to note is that customers can opt out of data collection for any paid-for version of SQL Server. That’s entirely fair, but if you have regulatory concerns, you should be sure that you don’t restore production data to development machines. You shouldn’t anyway, but just an FYI.

Usage data is going to be a part of the future of software, especially as more “services” integrate into what we think of as software. Those services aren’t always going to be under our control, and certainly part of the reason many of them are inexpensive is that additional data is captured about the people using the software. I hope all companies publish and adhere to some sort of privacy statement, and maybe that’s a good start. Rather than regulating the specific privacy protections that must exist, start by forcing companies to publish, and stick to, whatever policy they choose.

Steve Jones

 


Liable for Data Loss

When I first installed Windows 7, I was thrilled. Finally, Microsoft had slimmed down and improved the OS performance rather than continuing to bloat it further. After Vista, Windows 7 was a welcome change. Windows 8 was different, but for me as a desktop user, it wasn’t much of a change. I moved to Windows 10 as a beta user and thought it performed well. A little slower than 7, but overall a good move. I was glad to get the upgrade notice on a second machine, but the process was annoying. Trying to delay or avoid the change in the middle of some travel was hard. I could certainly sympathize with the users who complained they didn’t want the upgrade and couldn’t easily avoid it. I’m glad Microsoft changed this process a bit.

There were people who upgraded accidentally, or who felt forced to upgrade. Among them, some lost data and decided to sue Microsoft. Let’s leave aside the Windows upgrade process, Microsoft’s decision, and the merits of this particular case. Those are separate issues from the one I want to discuss, which is liability for data loss. At the core of the lawsuit is the time and information that people lost, an issue that few of us have had to deal with in our careers. At least, most of us haven’t had to worry that we are liable for the issues our software might cause.

Are we moving to a place where a person, or more likely a company, is going to be held liable for data loss from upgrades or patches? Customers can initiate legal action, but strong EULAs and prior legal decisions seem to indicate that much of the liability resides with customers and that vendors aren’t at fault. Is that a good thing? I’m not sure, but I do think that as data becomes more important and is used to justify decisions or drive business actions, there will be a push to ensure that anyone whose software changes data during patches and upgrades is liable for issues.

I’m surprised we haven’t seen more accountability from software firms to date, but I suspect many of the legal issues have been settled quietly, without much fanfare and under strong non-disclosure agreements. I’m not sure this is the best solution for anyone; to force some improvement and better quality in software, we need to take better care of our data. I don’t want us to move slower with software development or deployment, but I do want quality to improve.

Steve Jones


Agent Phases in VSTS Deployment

VSTS (Visual Studio Team Services) continues to grow and change over time. It seems every few months I see new features and changes, which is both good and bad. Good in that things usually improve. Bad in that I get confused and lost at times, and sometimes have to re-acquaint myself with the system.

One of the changes I noticed recently was deployment phases, which come in two flavors: Server phases and Agent phases. I won’t cover Server phases here, but I wanted to make a quick note on Agent phases because I think they’re handy and a nice packaging concept.

Agents

VSTS uses agents to do its work. I’ve written about how to download an agent so tasks run inside your network (on a server or client). This post looks at the other side: how do I control an agent from the release process?

Once I create a new release definition, I’ll see something like this:

[Screenshot: New Empty Definition - Visual Studio Team Services]

This is a blank deployment definition. To the left I see the environment I’m working with, in this case, just the generic Environment 1 name. For each environment, I can set up tasks. On the right I have the Add Tasks button, but also the “Run on Agent” item. This is the container where I will add tasks.

I used to just add tasks with VSTS 2015 here, expecting that everything would run on an agent. VSTS has hosted agents that contain a set list of software, and also downloadable agents that you run on your servers or workstations. Things changed recently to include phases.

Phases

Phases are places of execution that allow a series of tasks to be grouped together. VSTS and TFS 2017 give you two choices: Agent phases or Server phases. The difference is where the items execute, on an agent or on the server (VSTS or TFS) itself. In addition, I can set phases to execute in parallel if need be.

If I click the down arrow next to Add Tasks, I’ll get this dialog. It allows me to set up a process that runs on the VSTS server, or one that runs on an agent.

[Screenshot: New Empty Definition - Visual Studio Team Services]

I haven’t played with Server phases, but when I add an Agent phase, I get a section under which I can add tasks. When I click the “Agent phase” item, a new blade opens to the right. Note that the pool for these agents is the default pool.

[Screenshot: New Empty Definition - Visual Studio Team Services]

This threw me early on because I have a separate pool of agents for mobile development. As I upgraded some deployment definitions, I couldn’t figure out why I didn’t have any agents available to execute my tasks. It turns out I needed to change the agent pool here.

There are options here, but I haven’t really played with parallelism or demands since I typically deploy to one system at a time.

Once I’ve selected an agent, I can click Add Tasks and then add a series of tasks to that agent. If you look below, I have added a couple of tasks to each of the two agent phases. The phases will run one after the other.

[Screenshot: New Empty Definition - Visual Studio Team Services]

Each of these phases is independent, and as of the time I wrote this, there’s no way to copy a phase and duplicate it.

I’ve used this to upgrade my release definitions by adding new tasks, and then copying settings between one phase and the other. This is a manual, copy/paste operation, but I can easily see if I’ve duplicated everything by going back and forth quickly.

[Screenshot: ST Pipeline_Mobile - Visual Studio Team Services]

Once I’m done, I can just click the “x” to the right of the agent phase and delete the entire phase. I have to confirm this action, and then I’ve removed a complete phase.

Right now I haven’t seen any benefit outside of the upgrade scenario, but that one is useful. I duplicate my tasks, disable all the ones in one phase, and then see if the other phase runs. If it doesn’t, I can always re-enable the old tasks and disable the new ones.

I expect that as I perform more complex deployments I will use phases to logically separate things, and perhaps also use Server phases for some work when I don’t need an agent.


Making VSTS Deployment Changes to Databases without Breaking Your Application

One of the things that I do a lot is demo changes to databases with CI/CD in a DevOps fashion. However, I also want to make application changes to my sample app without breaking things. As a result, I’ve built a few ideas that work well for both situations. I found recently that these ideas can also help when I need to upgrade or change my CI/CD pipeline itself.

Note: DevOps isn’t a single thing; it’s a set of principles, ideas, and culture that produces results. I will endeavor to call out the specific principles I use to adhere to DevOps ideas. In this case, they are automation and continuous delivery.

Release On Your Schedule

One of the principles of DevOps is that we use feedback loops to ensure information moves from right (operations) to left (development).  One way of doing this is to release more often, though this isn’t required. What we really mean is release on your schedule, when you want, and allow developers to get feedback on their work quickly.

If a developer takes a month to build a feature, they don’t need hourly or daily releases. They need a release after the month is over (assuming testing, review, etc. has taken place). If releases occur on the 5th of the month, and the developer finishes on the 6th, they must wait a month before they get feedback. What I want to ensure is that we can release on the 5th, and also on the 6th if I need to.

Upgrading my VSTS Pipeline

I wrote in another post that I planned on upgrading my Redgate DLM Automation tasks in VSTS. I was doing this on a trip to a conference, and I didn’t want to start making the changes I’d normally make in a demo, since I’d have to undo them and might forget to clean something up. I hate making those silly mistakes, so I needed a way of testing my build and release without breaking my demos.

I decided to use a technique I rely on in presentations when I’m talking about process rather than code. In those cases, what I deploy doesn’t matter, but if I change random objects, I sometimes break the application using the database. As you can see here, I needed to run a bunch of tests, and repeat some of them.

[Screenshot: ST Pipeline_Mobile - Visual Studio Team Services]

 

Deploy Often

When I talk with Redgate clients who are starting to get comfortable with the idea of deploying on their own schedule, they will sometimes make innocuous changes that trigger a deployment they can test without breaking things. For example, a user might take this stored procedure (partially shown):

ALTER PROCEDURE [dbo].[GetEmailErrorsByDay]
  @date DATE = null
/*
Description:

Changes:
Date       Who         Notes
---------- ---         ---------------------------------------------------
2/14/2017  WAY0UTWESTVAIO\way0u   
*/
AS
BEGIN

IF @date IS NULL
  SELECT @date = DATEADD(DAY, -1, GETDATE())

  SELECT Errcount = COUNT(*)
   FROM dbo.EmailErrorLog
   WHERE EmailDate = @date

 

and make a small change. Perhaps they add SET NOCOUNT ON, or maybe they’ll add a comment. I’ve even seen someone add this code:

AS
BEGIN
  SELECT [2] = 2

These aren’t big changes, and you should certainly choose carefully which stored procedure to change (one that isn’t heavily used and isn’t critical). These are changes meant to test your process, gates, approvals, deployments, etc.
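
For instance, here’s a minimal sketch of that kind of change applied to the procedure above. The body is reconstructed from the partial listing, so treat it as illustrative rather than the exact production code:

ALTER PROCEDURE [dbo].[GetEmailErrorsByDay]
  @date DATE = NULL
AS
BEGIN
  -- The only functional difference: SET NOCOUNT ON. A trivial change, but enough
  -- to register as a new version in the VCS and trigger a build and release.
  SET NOCOUNT ON;

  IF @date IS NULL
    SELECT @date = DATEADD(DAY, -1, GETDATE());

  SELECT Errcount = COUNT(*)
    FROM dbo.EmailErrorLog
   WHERE EmailDate = @date;
END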

I decided to try something else. I tend to write this code when I want to test changes, since I can be additive with a procedure like this:

CREATE PROCEDURE Get7
AS
  SELECT 7;

Or I can modify things with this:

ALTER PROCEDURE Get7
  @plus INT
AS
  SELECT 7 + @plus;

In either case, I can see if my changes go through. Since I often deploy to multiple environments (QA, Staging, UAT, production, etc.), and I don’t always have deployments go all the way through (see my image above), I will usually end up creating Get7, Get8, Get9 as subsequent procedures. This way I can continue to commit new changes to the VCS, get new builds, and get new releases.

I can also do this with tables. My favorite is MyTable, MyTable2, etc. I usually just have an integer MyID column, but I can add other columns to test the ALTER process. I can even use this to test data movement, static (reference/lookup) data, or anything else to do with tables.
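
Here’s a rough sketch of that pattern. The MyID column comes from my usual habit above; the extra column and the lookup rows are just invented for illustration:

-- A throwaway table to prove that a CREATE TABLE deploys cleanly
CREATE TABLE dbo.MyTable2
(
  MyID INT NOT NULL CONSTRAINT PK_MyTable2 PRIMARY KEY
);
GO

-- A later commit: add a column to exercise the ALTER path
ALTER TABLE dbo.MyTable2 ADD SomeValue VARCHAR(50) NULL;
GO

-- And a little static (reference/lookup) data to test data movement
INSERT dbo.MyTable2 (MyID, SomeValue)
VALUES (1, 'Lookup value 1'),
       (2, 'Lookup value 2');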

Eventually (hopefully) I get clean deployments all the way through to all environments.

Cleanup

I sometimes get collisions, where a test returns an error that “Get10 exists”, and I’ll move on to Get11. However, I don’t want to leave those objects in all the environments. After all, I’ll likely find a way to improve things in the future, and I’ll want to repeat this testing.

This is usually one last deployment for me. I’ll delete these objects in dev and then deploy the changes all the way through to production. This lets me test any checks I have in place to prevent data loss, including whether approvers actually read the scripts. ;)
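
The cleanup script itself is nothing special. Something like this, committed in dev and deployed through every environment, does the job (DROP ... IF EXISTS assumes SQL Server 2016 or later; the object names are the throwaway ones from above):

-- Remove the throwaway test objects so the next round of testing starts clean
DROP PROCEDURE IF EXISTS Get7;
DROP PROCEDURE IF EXISTS Get8;
DROP PROCEDURE IF EXISTS Get9;
DROP TABLE IF EXISTS dbo.MyTable2;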


More Power in Your BI

I’ve seen lots of data visualization tools over the years: Cognos, MicroStrategy, ProClarity, and more. While I haven’t been a BI person for most of my career, in many of my positions someone has wanted to try some new tool, and eventually I’d get somewhat involved. I remember first seeing the OLAP cube browser in SQL Server 7 and showing my boss a quick demo. While it wasn’t something you could let an end user have, it got my boss excited, and they started a project to implement some OLAP for the sales department as I was leaving.

When I first saw Tableau years ago (2008?) at TechEd, it was the first tool I’d seen that was really slick for the end user. I expected they’d grow quickly, and they have. Many people have looked at the Tableau tools as the standard for data visualization. For years nothing else came close, and as much as I appreciated Microsoft bringing PowerPivot and other tools to Excel, they weren’t as nice to use as the graphical tools.

That changed with Power BI. When I first saw it, I could instantly see its power. Even altering and changing a report had an ease that I hadn’t seen since the Tableau tools. The initial release had lots of limitations and was web-only. It was slow, and refreshes from remote data weren’t great. There were security limitations, and I wasn’t sure Microsoft’s vision for the product made sense, even though I could see tremendous potential.

Things have improved, and I am somewhat amazed by the power of Power BI. A simple sales report is interactive: I can click on something intuitively, and the values I see adjust to focus on that item. What’s more, the other graphs on the screen adjust as well. I don’t have to stick with simple graphs, either; I can add richer imagery, such as this wine report, or this airline maintenance report that drives work. I was especially impressed with the analysis of Stephen Curry (which won the contest). Some of those reports I couldn’t imagine trying to build with other tools, especially SSRS.

Power BI is a great tool, and if you need to provide visualizations for end users, I’d urge you to take a look. The Power BI Desktop tool is great for modeling and working with data. I find myself using that for quick looks at data, building a graph in a way that’s easier than in Excel. I went through Microsoft’s EdX course, and was somewhat amazed to see the vast capabilities available in the product. Guy in a Cube, Adam Saxton, has a YouTube channel where he and Patrick LeBlanc (@PatrickDBA) bring you constant training and tips on how to get the most out of the tool.

What’s more, I keep finding more and more Power BI blogs and posts that I can add to Database Weekly every week. So many people are experimenting and finding ways to better analyze data with Power BI. This week we have a continent slicer, data privacy, collaboration, and more. With monthly releases, I find that people are constantly digging into the possibilities and helping you learn along with them. There are even developers building custom visuals that you can download, including one for aquarium lovers. If you want to learn about these, Devin Knight blogs regularly about custom visuals, and we include many of those links here and on SQLServerCentral.

Power BI is a great way to empower your users, reduce your reporting load, and make everyone happier. It lets users experiment with data and actually decide which reports are worth further investment of IT resources, perhaps to optimize the performance of certain visuals. Give it a try today, especially the desktop version, and see how easily you can start to examine data that might be important to you. Even the SQLDBAWithaBeard finds Power BI helpful.

Steve Jones


AI Helpers or Replacements

It’s interesting to look at the data business and how companies view DBAs, database developers, BI developers, data scientists, and more. There are some companies that really see value in our services, and I’m grateful for that. I’ve been gainfully employed for well over two decades to work with data. There isn’t much standardization in our jobs or what we’re expected to do, but I’ve grown comfortable with that. That’s one of the reasons the people working with me, my coworkers, are more important than the work. The work is the work.

As our systems become more advanced, there is concern over how much less some of our skills might be needed with new AI-type systems. Will the move to smarter systems mean more opportunity for us, or less? Certainly in the long term there might be fewer jobs if systems become really capable, but in the short term I don’t think so. As much as Microsoft has improved SQL Server, with easier HA configurations, the Query Store, Adaptive Query Processing (coming), and more, they aren’t replacing many of us. Maybe a few, but I think there’s still lots of technical work.

Dharmesh Shah wrote a nice piece on AI and how it will help many of us in our jobs, providing the easy information and guidance that lets us focus our skills. He sees bots as helpers, which to me presents new opportunities to interact with and work with customers and data. We will find ourselves more capable with help from AI, not replaced. As we get better machine learning and other adaptive algorithms, we’ll find new ways to work with data. This should provide us with new opportunities and new types of jobs that we might grow into. Plenty of people want to dismiss data science jobs as a popular area where anyone can claim the skills if they know a statistical function and can query data in R, but there are real jobs in those areas, and there are opportunities in new companies that might not have existed in years past.

There are going to be some amazing new ways that data and more intelligent algorithms will help us see our world in the future. There will also be scary ones, and many we don’t know if we can trust. Somewhere in there, the world will become very, very interesting for those of us that work with data on a daily basis, looking for new ways to extract information from all the bits and bytes that we store. Once again, this is something I do look forward to as a data professional.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 4.6MB) podcast or subscribe to the feed at iTunes and Libsyn.


The Value of Your Personal Data

Years ago I read Being Digital by Nicholas Negroponte of MIT. In the book, he relates a story about crossing the border from the US into Canada and a Border Patrol agent asking the value of his laptop. He said millions, not because of the hardware, but because of the data. I thought that was interesting, and I somewhat agree. The data I have on my machines, or really now, in cloud services, is more valuable than the hardware. The hardware is barely worth anything to me, including the time it takes to reinstall my applications, because I really treat my laptop like cattle rather than a pet. I could drop it and render it completely unusable, and I’d be back up, running, and productive on new hardware in less than four hours.

Then I saw this article on ransoming data for iCloud users, where the request is for US$75,000. This is for Apple itself and not individuals, but what if it were for each user? What would I pay to avoid losing all the data on my mobile device?

It certainly wouldn’t be $75,000, despite the fact that I think some of the data I have is precious. I love the pictures and video of my family most of all, but if I lost some of them, or maybe even all of my digital storage, would I just pay a ransom? Probably not. What about all the code in a VCS for my company? Would we pay a ransom for that? Likely we would, though perhaps not; we might think about reconstructing the code, especially if we were a small company.

Ultimately I think there isn’t a lot of money to be made with individuals’ data. Too many of us older folks remember when we did lose physical objects and they were just gone. The idea of losing things is painful, but not inconceivable. For younger people, it seems that much of the data we produce and consume is transient and isn’t necessarily that valuable to us. My kids love Snapchat for the ability to create and lose memories. I dislike it for the same reason.

Much of the data we personally have can be backed up, and possibly recreated if it isn’t. Many of our digital records have the characteristic that while we may have a copy, often some business or organization has another copy. Requesting new data is easy, and while it might cost some time and money, it may be preferable to the idea of paying a ransom, and potentially having to continue to pay again in the future with some virus still on our systems.

I don’t quite know what to think about ransomware and how this might evolve. I suspect that this will always be a problem in some way, just as hardware failures will likely plague us forever. The only solution I have is to create backups regularly that contain versions, and can be restored to separate, clean devices. I’ve gotten away from my own personal offline backups, mostly because of data size, but I do continue to try and keep at least 2 or 3 backups going to different locations and with different services because my data is at least that important.

Steve Jones


Does Speed Compromise Quality?

One of the parts of DevOps that is often hyped is the speed and frequency of releases. From Flickr and their 10 deployments a day to Etsy deploying 50 times a day, we’ve seen companies showcasing their deployment frequency. Amazon has reported they deploy code every 11.7 seconds on average. That seems crazy, but with so many applications and lots of developers, not to mention each change being smaller, perhaps it’s not completely crazy. With the forum upgrade here at SQLServerCentral, we had two developers (with occasional other sets of eyes reviewing code changes), and while we were bug fixing, we deployed multiple times per day.

Is that a good idea? Does a rapid pace of change mean that quality is lower and more bugs are released? It certainly can. If you’re a development shop that struggles with releases and code quality, producing software faster is not going to help you. If management pressures you to adopt DevOps and deliver code faster without changing the culture, without implementing automated testing (including for your database code), and without using automated scripts or tools to deploy software, then you are going to get more bugs out faster. You’ll still get to change direction quicker if you find you’re building the wrong software, but you’ll end up becoming less efficient because of bugs (and technical debt).

There’s a fantastic (long) video about refactoring code in two minutes. That’s a bit of an oxymoron, since the presentation is nearly two hours long, but the video comes from a real project. The key point of their approach is that good unit testing allows them to refactor code, to change things, without introducing bugs. That’s a big part of the #DevOps philosophy. I always note in my DevOps presentations that if you can’t implement unit testing, meaning you won’t bother, then you don’t get much benefit from CI, CD, or any DevOps ideas. Tests protect you from yourself (and others).
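
For database code, that safety net might be a tSQLt test. Here’s a minimal sketch written against the GetEmailErrorsByDay procedure shown earlier on this page; it assumes tSQLt is installed in the development database, and the test data is invented for the example:

-- One-time setup: a test class (schema) to hold the tests
EXEC tSQLt.NewTestClass 'ErrorLogTests';
GO

CREATE PROCEDURE ErrorLogTests.[test GetEmailErrorsByDay counts only the requested day]
AS
BEGIN
  -- Isolate the table so the test doesn't depend on real data
  EXEC tSQLt.FakeTable 'dbo.EmailErrorLog';

  INSERT dbo.EmailErrorLog (EmailDate)
  VALUES ('2017-02-14'), ('2017-02-14'), ('2017-02-15');

  -- Capture the procedure's single-column result set
  CREATE TABLE #actual (Errcount INT);
  INSERT #actual EXEC dbo.GetEmailErrorsByDay @date = '2017-02-14';

  DECLARE @actual INT = (SELECT Errcount FROM #actual);
  EXEC tSQLt.AssertEquals @Expected = 2, @Actual = @actual;
END;
GO

-- A CI build step can run the whole class and fail the build on any error
EXEC tSQLt.Run 'ErrorLogTests';

A suite of tests like this is what makes frequent database deployments safe rather than reckless.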

In many of the DevOps reports, companies that release faster report fewer bugs and less downtime. Since Amazon increased their release speed, they have had 75% fewer outages over the last decade, 90% less downtime, and far fewer deployments causing issues. TurboTax made over 100 production changes during tax season and increased their conversion rates. The State of DevOps reports bear this out (2016 here); thousands of responses show that speed doesn’t cause more bugs.

Because they work differently.

If your management won’t let you change the way you work, if you don’t implement automated unit tests (and other types of tests), if you don’t take advantage of version control, if you don’t ensure every change is scripted, then you won’t work differently, and speed will bring bugs.

You can do better. Your company can do better. Will they?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 4.2MB) podcast or subscribe to the feed at iTunes and Libsyn.


T-SQL Tuesday #89–Changing Times

This month the invitation is from Koen Verbeeck, and it has to do with the cloud changes coming to the data world, especially SQL Server. It’s not necessarily a technical topic, but it is an interesting one to think about.

I’m going to keep this a little short because life is busy, but I think this is something many people should keep an eye on. Not necessarily for your current position, but what if you need to move on?

If you’re interested, I host all the T-SQL Tuesday topics at tsqltuesday.com.

The Cloud is Changing Things

I sat in a talk at SQL Bits from Conor Cunningham of Microsoft. He’s one of the principal architects of SQL Server and Azure SQL. In the talk, he covered some interesting ideas about how SQL Server engineering has changed in the last decade. Victoria Holt wrote a short piece on some of the things Conor discussed.

There are a couple of interesting things that the cloud is enabling. First, Microsoft runs their cloud without any Ops team; developers are responsible for things in production. That’s 1.7 million databases, without any DBAs. Why does that work? They gather lots of data, so they learn when things are broken, unstable, or problematic. They do this with the 600 TB of telemetry they collect every day.

Of course, you and I won’t have that much information, but the cloud does enable Microsoft to think about how to make SQL Server more stable, and also how to add automation capabilities into the product. We haven’t seen much of this change in current versions, but the Query Store is the start of one thing, and Adaptive Query Processing (coming in v.Next) is another. I wouldn’t be surprised to see more, and that means our jobs as DBAs will change.
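
As a small taste of that instrumentation, the Query Store is just T-SQL away once you enable it. This is a rough sketch (the database name is a placeholder) that turns it on and then looks at the top CPU consumers from the captured history:

-- Enable the Query Store on a database (SQL Server 2016 and later)
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO

USE [YourDatabase];
GO

-- A quick look at the queries that have consumed the most CPU
SELECT TOP (10)
       qt.query_sql_text,
       total_cpu_us = SUM(rs.avg_cpu_time * rs.count_executions)
  FROM sys.query_store_query_text AS qt
  JOIN sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
  JOIN sys.query_store_plan          AS p  ON p.query_id = q.query_id
  JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
 GROUP BY qt.query_sql_text
 ORDER BY total_cpu_us DESC;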

I think there will be less check, configure, and verify work for DBAs, and maybe even less tuning work. There will always be developer needs, especially with more complex reporting, visualizations, and just understanding large data sets. There will also always be the need to write better SQL, as the optimizer can only do so much with bad queries.

The cloud interests and excites me. There are issues, concerns, and challenges. However, I also find working with Azure through PowerShell, being able to access different services from various places, keeping some (non-PII) data there, and avoiding the need to manage infrastructure to be key attractions.

I don’t know if it will happen, but I would hope that at some point the Azure, AWS, and Google clouds would license their services, or even allow others to resell and manage portions of them, to encourage competition and give us some choice in whom we deal with. If so, then I could see more and more companies considering moves to the cloud for more of their data, especially when there could be different levels of service and protection for different needs.

The cloud is changing things, even if you aren’t in the cloud. That can be an opportunity if you take advantage of it.


Upgrading VSTS Redgate Build Tasks

I’ve been putting it off, but while prepping for my SQL Bits demos, I decided this was a good time to upgrade my original Redgate build and release tasks in VSTS from v1 to v2.

The first step was to go into the marketplace and find the new tasks. If you browse the marketplace (click the shopping bag icon in the upper right of VSTS) and search for “redgate”, you’ll see the tasks.

[Screenshot: Search results for "redgate" - Visual Studio Marketplace]

I picked the two on the right, the build and release tasks v2. The v1 tasks aren’t in the marketplace, but if you’ve added them to your account, they’re still there and they work in your build and release pipelines.

Once I installed them, they appear in my list of extensions.

[Screenshot: Manage extensions - Visual Studio Team Services]

Now I can edit my pipelines.

I’ve got a release pipeline that looks like this. Note that these are the v1 plugins, because there’s no “v2” in the name.

[Screenshot: SimpleTalk Release Pipeline 1 - Visual Studio Team Services]

My plan is to upgrade these to the new extensions; however, there are lots of settings. If you look at the right side for any of the tasks, for example the Create task, there are lots of boxes to fill in.

[Screenshot: SimpleTalk Release Pipeline 1 - Visual Studio Team Services]

This is expected; if I were doing this manually, I’d have a long set of commands, switches, or parameters that I’d need to pass in to a process. After all, the mechanics of implementing CI or CD aren’t hard, but they do have lots of moving parts.

My first step in making this easier is to add a new task. To do that, I click the “Add tasks” button above. This defaults to the set of tasks for my particular function, in this case release (deploy). I scroll down to see the v2 tasks.

[Screenshot: SimpleTalk Release Pipeline 1 - Visual Studio Team Services]

Here I see both the v1 and v2 tasks, since I’ve installed both into my account. In this case, I’ll pick the “2” version of the task.

Once this is added, I need to configure the task. In my case, the easiest way to do this was to click on the v1 task, copy the contents of each text box, and then click on the v2 task and paste the values in.

Once I had done this, I had both tasks listed. For this particular pipeline, I had actually added a new Agent phase, separating my tasks out. There wasn’t any great benefit to this, other than being able to delete one whole phase and all the tasks in it once things are working.

[Screenshot: ST Pipeline_Mobile - Visual Studio Team Services]

After copying all the settings from one to the other, and checking that v2 was configured the same, I was ready to test. I first went to each of the v1 tasks and unchecked the “enabled” box. This means those tasks won’t run, but they’re still in the definition.

[Screenshot: ST Pipeline_Mobile - Visual Studio Team Services]

After that, I created a release and deployed it. Not every deployment worked. My first ones did, but when I tried to hit the production environment (the far right), it failed early on. This list is from newest to oldest, so I had a few things to work out here.

[Screenshot: ST Pipeline_Mobile - Visual Studio Team Services]

As you can see, this isn’t necessarily a simple, easy process. In my case, the v2 tasks have some additional path items, and I had to sort those out. I also had firewall issues reaching production, since I was traveling between tests, which meant I kept forgetting, and then needing, to reset the firewall rules.

However, it’s all good now.

[Screenshot: ST Pipeline_Mobile - Visual Studio Team Services]

I would encourage you to upgrade your DLM v1 tasks to v2. There are a few bug fixes, some of the deprecated cmdlets are removed, and these work slightly better. I now have pathing options to separate my environments on the agent and can easily see the code being run.

I’ll talk about my test procedure for upgrading in the next post, because I think trying to do too much at once is how I’ve gotten into trouble and created stress for myself in the past. Now I have a better idea. 
