Friday, January 25, 2019

Connecting Azure DevOps with Lifecycle Services for Release pipelines

A week ago, Microsoft announced the release of a new Azure DevOps task. That was followed by more details on the setup from the announcement's author, Joris de Gruyter. And with the help of this post by Business Application MVP, Munib Ahmed, I sat down and ran through the setup a couple of days ago.

I wanted to briefly add some considerations of my own while we wait for the official documentation.

This new feature is quick and easy to set up, and is something everyone should adopt sooner rather than later. It shaves off the time spent downloading the complete build artifact somewhere and then uploading it to the Dynamics Lifecycle Services Project Asset Library. After a successful build of the full application, the package is automatically "released" and uploaded to the asset library.

We expect more "tasks" to be added, allowing us to setup a release pipeline that also let us automatically install a successful build on a designated target environment. So getting this setup now, and have the connection working, will set the stage for adding the next features as they are announced.

Here are some of the requirements and considerations while you set this up:

  • You need to register an Azure application of type Native (public, multi-tenant). While it is said you can use the preview experience to register the app in the Azure Portal, I had to use the "non-preview" experience to ensure I got a correct Native app registration, and not a Web app. While you can add the necessary permissions (user_impersonation), the admin consent must be run for the permissions to take effect. If you are setting this up and you are not a Global Admin or Application Admin, you will need to get someone with the necessary permissions to run the admin consent part. 
  • The connection also requires user credentials as part of the setup. This should not be just any user, if you think about it. You don't want the connection to break just because the password was changed or the user was disabled. Also, multi-factor (or two-factor) authentication will not work here. So you might want to create a dedicated user for this connection. The user does not need any licenses, just a normal cloud user you have set up and logged on with at least once. The user also needs to be added to the Lifecycle Services project with at least Team member permissions (access to upload to the Asset Library). Log on to LCS with the user once and verify access.
  • When you create the release pipeline for the first time, you will need to install the Azure DevOps task extension. Search for "Dynamics" and you should find "Dynamics 365 Unified Operations Tools". If you are doing the setup with a user that has access to many different Azure DevOps organizations (i.e. you're the partner developer doing this across multiple customers), make sure you install it on the correct Azure DevOps organization. When it is installed, you will have to refresh the task selection to see the new available task(s), including the new task named "Dynamics Lifecycle Services (LCS) Asset Upload". 
  • When configuring the task, you will want to use variables to replace parts of the strings, like the name of the asset, the description, and so on. When you run a release, one of the steps actually lists out all the available variables, though with a slightly different syntax. Have a look at the list of available variables in this article, plus the tip on how to see the values they are populated with during a run; see also the sketch right after this list.
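As a minimal, hedged sketch of that tip: the variables you reference as $(Build.BuildNumber) or $(Release.ReleaseName) in the task fields are exposed to script steps as environment variables (with dots replaced by underscores), so a throwaway PowerShell step in the release can dump them for inspection:

# Dump all environment variables the agent exposes, including pipeline/release variables
# (variable names have dots replaced by underscores, e.g. BUILD_BUILDNUMBER)
Get-ChildItem Env: | Sort-Object Name | Format-Table Name, Value -AutoSize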
If you already have a successful build ready, go ahead and set up the release pipeline and run it once to see it succeed - or fail. If it fails while trying to connect, it could be one of the following errors:
  • AADSTS65001: The user or administrator has not consented to use the application with ID '***' named 'NAME_OF_APP'. Send an interactive authorization request for this user and resource.
    Here the admin consent has not been run successfully. Someone with proper permissions needs to run "Grant permissions" in the Azure Portal.
  • AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access.
    This is most likely because the user credentials used for the connection are secured with multi-factor authentication. Either use a different account without MFA, or disable it - though most likely it is on for that account for a reason. 
I would strongly encourage everyone to set this up, and feel free to use the community forums, Twitter or the comments section if you run into issues.

Saturday, January 12, 2019

Important changes to how we will update the production going forward

Not long ago, some updates on how we will be doing self-servicing of our non-production environments were published on docs. There is one particular statement I want to address with this post:
You will need to provide a combined deployable package for customizations. That is, all custom extension packages, including ISV packages, must be deployed as a single software deployable package. You will not be able to deploy one module at a time. This was always a recommended best practice and is now enforced.
It has been recommended for a long while to always ensure any changes to the application, either updates or new modules, are serviced first to the sandbox and then to production in one single package - and by that I mean the same single package containing everything that is not already in standard. This would include:
  • ISV solutions and deliveries
  • Customer specific customizations (i.e. from your partner)
  • Hotfixes (relevant only for version 8.0 and older supporting individual hotfixes)
It may seem counter-intuitive and unnecessary to do it like this. You may argue we should be able to push individual modules and updates to the sandbox and to production. Why enforce a "single package"? What are the benefits?

The core reason is that we are moving away from updating the application; instead, we are replacing it. The application becomes an immutable piece of software, one that does not change and is not susceptible to change. 

This means the "updated" version of the application instead is setup and runs on a new instance, replacing the previous version of the application. The new pattern will sustain and ensure the following expectations:

Predictability - One single package having all the code and metadata, and all the module dependencies secured, ensures the same behavior every time it is used in a deployment.

Repeatability - Repeating the same package deployment on the next environment carries little to no risk, for example when repeating in production the install already done in sandbox.

Recovery and Rollback - The single package is a last known good state, which we can roll back to in case we need to recover from a deployment failure. 

Scale-out - Reusing the same single package lets us easily and safely repeat the deployment on new instances, making it safe and easy to scale out.

Portability - The same package can be safely used if you, for whatever reason, need to relocate your installation somewhere else in the world.

Think about all the benefits of this. It is music to my ears. 

In theory, today, you could service your sandbox and production with individual, isolated packages. You could handle the order in which you install the packages to avoid breaking dependencies. You could install updated versions of packages in a test environment to see if they broke something in conjunction with any of the other existing packages. It is possible - but it is not recommended. Instead, it is recommended to use the build and release procedures to create a single deployable package containing all the changes, across all modules and packages.

The announcement of the new immutable pattern for self-servicing environments is great news. It may change the way you've published updates in the past, if you used to push individual "delta" packages. If you've been publishing single deployable packages from your build environment, you're already following best practices, and the enforcement will not change how you do things. 

What do you think about this? Share your comments and thoughts either in the comments or on Twitter.

Saturday, December 8, 2018

Controlling the Visual Studio workspaces for your Dyn365FO developers

Introduction

At the time of writing, doing development for Microsoft Dynamics 365 for Finance and Operations (FnO) requires a dedicated development machine. This machine is pre-configured with a Visual Studio extension that enables FnO development. One important and perhaps peculiar fact about these environments is the fixed disk location where you create, edit, save and build your modifications. The folder is most often referred to as the "Package Local Directory", but I will use the acronym PLD in the rest of this article. This is the folder containing the packages/modules, and you may have them either with or without source code. Typically, vendor solutions are shipped only in their binary form, meaning you do not have the "design time" metadata, only the "run time" payload. As for Microsoft's own packages/modules, you will typically have both the metadata and the binaries, allowing you to view and step through code while developing and debugging. And of course, your customizations (assuming you are a developer) will have their metadata, and after your first local build, their binary counterparts (plus other artifacts, depending on what you're creating).
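As a small, hedged illustration: on most development VMs the PLD lives under a path like K:\AosService\PackagesLocalDirectory (the drive letter varies), and you can quickly list what is installed there with a line of PowerShell:

# List the packages/modules in the Package Local Directory (adjust the drive letter to your VM)
Get-ChildItem 'K:\AosService\PackagesLocalDirectory' -Directory | Select-Object Name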

Workspace

This post is about the "workspaces" in Visual Studio.

So what is the deal with the "workspace"? Well, it is a necessity when you want to develop. It is basically what defines the paths to your local copy of the code you are sharing with the rest of the team. And for FnO there is one path that is fixed, out of the box, and that is the PLD. You are free to set up other folders and share them with the team, but the workspace needs to have at least one entry that refers to the PLD. All customizations you want to share with the team, and with the BUILD machine, need to be in the PLD, added to source control and committed to the central code repository.



Ok, so that is all fine. Any other considerations?

Yes! The developer who creates the workspace actually ends up being the "owner" of the workspace on that machine. So if another developer connects to the same machine and wants to develop using their own user and credentials, that second developer needs to be able to work against a workspace pointing to the PLD. Otherwise, the second developer is blocked from doing development.

So what's the problem? Well, by default, a newly created workspace is private, and only the owner of the workspace can use it. To make things worse, any other user who wants to create their own workspace on the same machine will not be able to point it to the PLD. Only one workspace at a time can point to a given folder on that machine, and the PLD is such a fixed folder (at the time of writing).

There is a solution, though. The initial owner needs to change the workspace from "Private" to "Public", allowing any other developer connecting to the same machine to reuse the initial workspace.



This is a simple solution where the same development machine is shared between developers. It is also handy if a developer has pending changes on that machine, then takes a few weeks of holiday, and another developer needs to connect and commit them. Yeah, that can happen.

Administer the Workspaces

Ok, so what if the developers create the workspaces themselves, set them up as Private, forget about it, and then someone else has to reuse them? Or what if you simply want to go through and review the workspaces created out there?

Well, the workspaces and the information about them are also stored centrally, and someone with the "AdminWorkspaces" privilege can change them (a permission granted by default to the Azure DevOps (VSTS) organization security group called "Project Collection Administrators").

So in this post I will show how you can do this. There are several articles and posts out there discussing this, but it's always nice to share this in the context of Dyn365FO development, in my opinion.

If you have the necessary permissions, you can run the "Developer Command Prompt for VS2015" available on one of your development VMs. I assume here that you have run Visual Studio at least once and connected to the Azure DevOps (VSTS) organization you are working against.



If you run the following command, it will list all the workspaces created.
tf workspaces /owner:*
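If the organization has many workspaces, the listing can be narrowed down or expanded with the standard tf switches, for example (the machine name is just a placeholder):

tf workspaces /computer:MYDEVVM /owner:* /format:detailed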

You will see a list of workspaces by name, owner and machine. The next thing you can do is edit one of the workspaces by running the following command:
tf workspace WORKSPACENAME;OWNER  

When referring to the owner, use the email address for simplicity.

The workspace form now lets you change properties like the permission from Private to Public, and you can even change the owner (again, use the email address), for example if you need to take over the workspace of someone who has since been deleted as a user.
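If you prefer to skip the interactive form, the permission can also be flipped directly with the standard /permission switch - a sketch, where the workspace name and owner are placeholders:

tf workspace /permission:Public WORKSPACENAME;owner@yourdomain.com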

You can also remove old and obsolete workspaces by using the following command:
tf workspace /delete WORKSPACENAME;OWNER

Changing workspaces while they are in use is obviously not very smart. Change them with care, or you might ruin someone else's work and day.

Using Team Foundation Sidekicks for VS2015

There is another option as well: a free tool that also lets you administer workspaces, provided your user has the necessary permissions.
You can download it from here:
http://www.attrice.info/downloads/index.htm#tfssidekicks2015
(Tip! Use Google Chrome to download the MSI, if Edge/IE blocks you)

Sunday, November 25, 2018

Considerations when "upgrading" Dyn365FO from 8.0 to 8.1

Version 8.0 of Microsoft Dynamics 365 for Finance and Operations was released in the summer of 2018. Just a few months later, in October, version 8.1 was released. If you have environments running version 8.0, be it development environments, demo environments, or even production and (tier 2+) sandboxes, you might be thinking about getting them "upgraded" to 8.1.
It's not really an upgrade, but rather an update.

The overall process is a lot easier compared to coming from 7.x. I did a series of posts on how to get started here:
https://yetanotherdynamicsaxblog.blogspot.com/2018/11/upgrade-from-7x-to-8-series-post-1.html

Microsoft outlines the process in one single article here:
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/migration-upgrade/appupdate-80-81

Why update?

One of the main differences between 8.0 and 8.1 is that the latter version is a lot easier to service with updates. Version 8.0 still supports individual application hotfixes, meaning you download and apply them and put them in VSTS, just as you would with 7.x. You could argue that being able to pick individual hotfixes and avoid taking all updates is a good thing, but in fact it is not the way forward. Instead of thinking that you may have to avoid hotfixes, and potentially have to "roll back" updates that break you, you need to shift to a mindset where any ongoing issues are immediately reported back to Microsoft, allowing them to ship new updates that resolve the issues, not only for you, but for us all. With that mindset, you will want to take the 8.1 version, which does not allow individual hotfixes, but instead gives you everything cumulatively at the point you pick up updates. This is also how "One Version" will behave, and from April 2019 you will be getting updates in this fashion.


So in effect, when servicing 8.1+ you get only one update tile, and it contains everything, downloaded cumulatively. You'll use the complete update package to patch your environments, and there is no need to put the updates in VSTS either. Things are just so much easier.

Development and build environments

Even though Microsoft has a Software Deployable Package in the Shared Asset Library in LCS that does the update from 8.0 to 8.1, it is recommended that you deploy new 8.1 build and development environments. Why is that, you may ask. On a development environment, you have both source code and a runtime (compiled code). Your 8.0 development environment might even have been updated with hotfixes added back in time. Part of the process is to remove any 8.0 updates and start from scratch with 8.1. So when you start removing already committed Microsoft application updates from Azure DevOps (VSTS), you cannot avoid this also being reflected in your local copy of the source code.

But you do not need to compile Microsoft's packages, so who cares if the code is wrong? Well, what if you want to debug, extend or view code? Even though you do not need to recompile Microsoft's packages, you run the risk of having invalid, incomplete or even erroneous code on your development environment. So it follows that your best option is to redeploy a new set of development boxes, and of course build box(es), and depending on your choice of server size and storage, the new servers might be ready for you within 3-4 hours.

But before you connect the newly deployed development environments to the source code, it is paramount that you prepare a new 8.1 branch that is clean of updates. It may contain 8.0 extension modules, but no Microsoft modules. You can prepare all of this while the new environments are being deployed.

Non-development environments

What about demo, test and sandbox environments? Well, typically you do not care about the source code on the demo boxes (even though it might be there), and as for acceptance test sandboxes, where you do not even have Visual Studio, it definitely doesn't matter. These environments you can just go ahead and update using the Software Deployable Package.



Well, unfortunately it might not be quite that simple. If the environment has other non-Microsoft packages installed, LCS will prevent you from simply applying the update package. You may have some ISV solutions, or some package you've created, released and then installed on the environment through LCS.
LCS knows about this and can list the non-Microsoft packages installed. In fact, if you try to apply the update package, LCS will stop you and list the packages blocking you.



Error: "Modules on the environment do not match with modules in the package. Missing modules: [...]"

In order to continue, you will need a pre-compiled version of these modules built on an 8.1 environment. Depending on your scenario, that either means getting the 8.1 version from a vendor or partner, or simply getting your own package built and released through your new and shiny 8.1 boxes.

As stated in the upgrade guide, you are recommended to prepare one single build release of all the extension modules and packages. When you have the 8.1 package ready in the Asset Library, you can simply merge it with the update package and execute the update.


If all your demo and test environments were using the same set of non-Microsoft packages and modules, you simply reuse the same merged package to update all of them.

Happy updating!

Saturday, November 17, 2018

Setup a cloud storage for database copy operations

This post will show you how quickly and easily you can set up a cloud storage account and then copy the database around between your environments. Having said that, we are waiting for this feature in LCS, and eventually there will be tooling that does this for us in a fully managed way. However, while we are waiting, we can set this up ourselves.

Setup the Storage Account

You will (obviously) need an Azure subscription for this to work. All of the steps below can be completed using a PowerShell script, so advanced users will probably write that up (see the sketch at the end of this section), but here I will show how you can easily get this done with some clicking around. Still, you can set this all up manually in a matter of minutes.

Start with opening the Azure Portal and open "Storage Accounts". You will create yourself a new one.



You will need to choose a Resource Group, or create a new one. I typically have a Resource Group I put "DynOps" stuff in, like this Storage Account.

I want to make this a cheap account, so I tweak the settings to save money: I opt for local redundancy (LRS) only and the Cool access tier. Perhaps the most important setting is the Region. You will want to choose a region that is the same as the VMs you are using; you get better performance and save some money (not much, though, but still).

Oh, and also worth mentioning: the account name must be globally unique. There are a few naming guidelines for this, but simply put, you will probably prefix it with some company name abbreviation. If you accidentally pick something already taken, you won't be able to submit the form.



It only takes a few minutes for Azure to spin up the new account, so sit back, relax and take a sip of that cold coffee you forgot to enjoy while it was still warm.

The next thing you'll do is open the newly created Storage Account, scroll down the list of "things you can do with it" and locate "Blobs". You will create yourself a new blob container and give it a name, for example "backups" or just "blob". Take note of the name, as you will need it later.



Then you will want to get the Access key. The Access key needs to be kept as secret as possible, since it basically grants access to everything you put into this Storage Account. If you later worry that the key has been compromised, you can regenerate it, but then your own routines will have to be updated as well. There are other ways to secure usage of the Storage Account, but for the sake of simplicity I am using the Access key in this example.



And now you are set. That entire thing literally takes just a few minutes, if the Azure Portal behaves and you didn't mess anything up.
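For reference, here is the scripted alternative mentioned above - a minimal sketch using the Az PowerShell module (the older AzureRM cmdlets work the same way); the resource group, account and container names are just examples:

# Create (or reuse) a resource group for "DynOps" stuff
New-AzResourceGroup -Name 'DynOps' -Location 'westeurope'

# Create a cheap storage account: locally redundant, Cool access tier
New-AzStorageAccount -ResourceGroupName 'DynOps' -Name 'uniqueaccountname' `
    -Location 'westeurope' -SkuName Standard_LRS -Kind StorageV2 -AccessTier Cool

# Grab the Access key and create the blob container
$key = (Get-AzStorageAccountKey -ResourceGroupName 'DynOps' -Name 'uniqueaccountname')[0].Value
$context = New-AzStorageContext -StorageAccountName 'uniqueaccountname' -StorageAccountKey $key
New-AzStorageContainer -Name 'backups' -Context $context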

Using the Storage Account

I've become an avid user of the PowerShell library d365fo.tools, so for the next example I will be using it. It is super easy to install and set up, as long as the VM has an internet connection. I'm sure you will love it too.
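If the module is not already on the VM, a minimal install from the PowerShell Gallery looks something like this (assuming PowerShellGet is available and the machine can reach the gallery):

# Install d365fo.tools for the current user from the PowerShell Gallery
Install-Module -Name d365fo.tools -Scope CurrentUser -Force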

Assuming it is installed, I will first run a command to save the cloud Storage Account information on the machine (using the popular PSFramework). This command will actually save the information in the Registry.


# Fill in your own values
$params = @{
    Name = 'Default'                      # Just a name, because you can add multiple configurations and switch between them
    AccountId = 'uniqueaccountname'       # Name of the storage account in Azure
    Blobname = 'backups'                  # Name of the Blob container on the Storage Account
    AccessToken = 'long_secret_token'     # The Access key 
}

# Create the storage configuration locally on the machine
Add-D365AzureStorageConfig @params -ConfigStorageLocation System -Verbose 

Now let's assume you ran the script below to extract a bacpac of your sandbox Tier2 environment.

Import-Module d365fo.tools
 
$dbsettings = Get-D365DatabaseAccess
 
$baseParams = @{
    DatabaseServer = $dbsettings.DbServer
    SqlUser = 'sqladmin'
    SqlPwd = 'SQLADMIN_PASSWORD_FROM_LCS'
    Verbose = $true  
}
$params = $baseParams + @{
    ExportModeTier2 = $true
    DatabaseName = $dbsettings.Database
    NewDatabaseName = $($dbsettings.Database + '_adhoc')
    BacpacFile = 'D:\Backup\sandbox_adhoc.bacpac'
}
 
Remove-D365Database @baseParams -DatabaseName $($params.NewDatabaseName)
New-D365Bacpac @params

You now want to upload the bacpac (database backup) file to the blob in your cloud Storage Account using the following PowerShell script.

Set-D365ActiveAzureStorageConfig -Name 'Default' 
 
$StorageParams = Get-D365ActiveAzureStorageConfig
Invoke-D365AzureStorageUpload @StorageParams -Filepath 'D:\Backup\sandbox_adhoc.bacpac' -DeleteOnUpload 

The next thing you do is jump over to the VM (Tier1, onebox) where you want to download the bacpac. Obviously, d365fo.tools must be installed there as well. Assuming it is, and assuming you've also run the command above to save the cloud Storage Account information on that machine, you can run the following PowerShell script to download it.

Set-D365ActiveAzureStorageConfig -Name 'Default' 
 
$StorageParams = Get-D365ActiveAzureStorageConfig
Invoke-D365AzureStorageDownload @StorageParams -Path 'D:\Backup' -FileName 'sandbox_adhoc.bacpac'

Finally, you would run something like this to import the bacpac to the target VM.

Import-Module d365fo.tools
 
$bacpacFile = 'D:\Backup\sandbox_adhoc.bacpac'
$sourceDatabaseName = "AxDB_Source_$(Get-Date -UFormat "%y%m%d%H%M")"
 
#Remove any old temp source DB
Remove-D365Database -DatabaseName $sourceDatabaseName -Verbose
 
# Import the bacpac to local SQL Server
Import-D365Bacpac -ImportModeTier1 -BacpacFile $bacpacFile -NewDatabaseName $sourceDatabaseName -Verbose
 
#Remove any old AxDB backup (if exists)
Remove-D365Database -DatabaseName 'AxDB_original' -Verbose
 
#Stop local environment components
Stop-D365Environment -All
 
#Switch AxDB with source DB
Switch-D365ActiveDatabase -DatabaseName 'AxDB' -NewDatabaseName $sourceDatabaseName -Verbose
 
Start-D365Environment -All

Isn't that neat? Now you have a way to copy the database around, while we're waiting for this to be completely supported out of the box in LCS - fingers crossed!

Thursday, November 1, 2018

Upgrade from 7.x to 8.+ series | Post 5 | Upgrade Sandbox and finally Production

Introduction

[EDIT]: Changes since November 2018 have forced me to make some changes to this series. I will point out the changes.

In this series of posts, I will try to run you through the process of completing the upgrade from 7.x to 8.+ of Dynamics 365 for Finance and Operations.

Quick navigation:
Upgrade from 7.x to 8.+ series | Post 1 | Start in LCS
Upgrade from 7.x to 8.+ series | Post 2 | Deploy Dev and Grab source DB
Upgrade from 7.x to 8.+ series | Post 3 | Validate Code and Data in Dev

Prepare a sandbox upgrade for validation

[EDIT]: This section has been modified.

Before you can go ahead and request an upgrade of Production, you will want to do a pre-production validation in the sandbox environment. You may read the details here:
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/migration-upgrade/upgrade-latest-update#upgrade-your-tier2-standard-acceptance-test-sandbox-environment

Before you start this process, you will want to make sure you have the following uploaded to LCS Asset Library:
  • The upgraded application ("Application deployable package"), downloaded or released from the successful build
  • The update package ("Platform and application binary package") which you prepared in step 3 of the series and installed on the build environment in step 4 of the series
You will service the sandbox and install the packages. If you're smart, you will merge the two packages into one single package and service them together in one single operation. Merging packages works as long as they are on the same platform version.

Now you can let the users start hammering on the system to potentially discover everything is flawless (knock on wood).

If you do not have any Production deployed yet, the steps are:
  1. Redeploy sandbox with target version. Make sure to select the upgraded application package. If you don't, you will have to install it afterwards, before you continue to the next step.
  2. Import the upgraded bacpac from step 3 in the series. Here you can use the tooling in LCS if the database was uploaded into the Project Asset Library.
  3. Validate!

Production

When you have validated the sandbox and you are ready to upgrade Production, you will replay the steps you did in the sandbox, but this time in Production.

Good luck!

Upgrade from 7.x to 8.+ series | Post 4 | Setup a new Build

Introduction

[EDIT]: Changes since November 2018 have forced me to make some changes to this series. I will point out the changes.

In this series of posts, I will try to run you through the process of completing the upgrade from 7.x to 8.+ of Dynamics 365 for Finance and Operations.

Quick navigation:
Upgrade from 7.x to 8.+ series | Post 1 | Start in LCS
Upgrade from 7.x to 8.+ series | Post 2 | Deploy Dev and Grab source DB
Upgrade from 7.x to 8.+ series | Post 3 | Validate Code and Data in Dev

Some preparations before deploying Build VM

Basically, what we want to do is have a new 8+ branch that the build environment will pull code from. Beyond that, you may want an additional development branch to isolate ongoing development in the future, but I've left that out of the scope of this article.
If you've read the previous posts, you know the Code Upgrade in LCS created a "release" branch folder with a prepared, upgraded application. Given that you've completed the code upgrade and validation as mentioned in the previous post, you should now be able to copy the result over to a new main branch for 8+.

The flow can be displayed sort of like this:


Now, obviously you're most likely going to delete/remove the old main branch and possibly also the "release" branch in the future. But the flow above can still be achieved. There are many ways to actually do this, and some have very strong opinions on how to branch the source.

You can easily create a new main branch by using the prepared "release" branch as the source. You can do this using Source Control Explorer inside Visual Studio running on your development VM.



You will simply give the new branch a unique name, separating it from the old main.


The name of the branch can actually be changed later, if that bothers you. However, we will deploy a Build environment later, and this will create a Build definition that needs the branch name to be correct - or the build definition will not work.

Don't forget, your local changes on the VM will need to be checked in to Azure DevOps (VSTS).
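If you prefer the command line over Source Control Explorer, the branch creation can also be sketched with tf from the Developer Command Prompt; the server paths below are purely examples, so adjust them to your own project structure:

# Branch the upgraded "release" folder into a new 8.1 main branch and check the branch in server-side
tf branch "$/MyProject/Trunk/Release" "$/MyProject/Trunk/Main81" /checkin /comment:"Create 8.1 main branch"

The new branch still needs to be mapped in your workspace before you can work with it locally.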



Another thing we will want to do is to create ourselves an isolated Agent Pool in Azure DevOps (VSTS). We want to make sure only 8+ build agents are in this pool of agents. You will need at least one, but who knows if you will add more in the future.

You will need some permissions in Azure DevOps (VSTS) to create this, but start at the Organization level and create a new Pool. I named it D365FO81 (since it will be used for 8.1.x). I have lots of projects not related to Dynamics, so I didn't want to push the agent pool to all projects.



I then opened the Project itself and added the Agent Pool to the project.


Deploy Build VM

[EDIT]: This section has been edited.
Now we are ready to head back to LCS and deploy a Build VM. With the preparation above, we can fill out the VSTS part like this, and it will make sure to put the build agent in the right pool, plus make sure the deployed build definition points to the right branch.



Select the correct topology, and if you're deploying this on a private/self-hosted Azure subscription, you can choose a setup with DS13v2 and 14 standard disks of 64GB each. Again, I'm leaning on the community here to learn what they recommend. These things change over time, and I can't promise I'll get back to this post and update it.

If you deleted the existing MS-hosted build environment and are deploying a new MS-hosted one, you won't get any options to decide on VM size or disk setup.



Notice I fill in the name of the Agent Pool and the name of the branch. I also give the agent a unique name. It will take quite some time before the Build environment is up and running.

EDIT: Before you continue, go ahead and install the same update package that was installed on the development environment in step 3 of my series. Installing this same "Platform and application binary package" on the Build environment ensures the build runs on the exact same version, and that the artifact created from the build pipeline is of the same version. The next time you want to get more updates for the platform and application, you can create the update package through the update tiles on the Build environment. 

When it is, you will go ahead and schedule a build on the new Build Definition. The job will be picked up by the right Agent Pool, and then picked up by the agent sitting on the Build VM.

When the Build is complete, make sure to upload the Deployable Package to the LCS Asset Library. You will need it for the final post.