Channel: Waldo's Blog

Dependency Analysis Tool (ALDependencyAnalysis)

Remember this post?  Probably not.  Nearly a year ago at Directions US, I showed some “how did I do stuff” during a number of sessions.  And it ended with a lot of feedback, which came down to: “can I have it”?  So, that’s where I wrote a post “I have work to do” ;-).

The “DevOps”-part of the work is done: ALOps is available and well used :-).

But the second promise – the “Dependency Analysis” – I only completed in November 2019, and I totally forgot to blog about it.  In my defense – I did explain it at NAVTechDays, and that video IS online.  You can find it here: NAV TechDays 2019 – Development Methodologies for the future (this is a link to the point in the video that explains the “dependency analysis” part).

What is it all about?

Well, in the move from C/AL to AL, you have a few options. 

In short:

Either you migrate your old code to AL (you know, the txt2al way of doing things) and basically end up with “old wine in new bottles”.

Or you rewrite.  And if you rewrite, you either rewrite everything in just one app, or you take the opportunity and divide your old monolith into a multitude of apps.

In my opinion, it does make sense to rewrite the solution/product into AL, and take the opportunity to split it in multiple apps – and make dependencies if necessary.

Thing is – when you have a product that multiple people have been working on for multiple years, there is not one single person that has an overall overview of all created functionality – let alone how it was developed (and therefore how the pieces depend on each other).  But – if you are rewriting your product, you probably WANT to have a complete overview of all this, INCLUDING a view on the dependencies.

So you have to analyse that.  Hence the name: “Dependency Analysis“.

How do I analyze an old codebase, and still have a complete overview of the entire functionality – and how do I decide on how to split it in apps?

In my opinion, the only way to do that, is to automate the crap out of it.  The only way to not forget anything, is to not use your memory.

Overview

In my company, we created a set of tools that I’d like to share with you.  It contains:

  • PowerShell scripts that analyze the C/AL code
  • A Business Central App that has an API to upload the data from PowerShell, and handle it for your analysis

All code is saved in GitHub, and you can find it here: https://github.com/waldo1001/ALDependencyAnalysis

All contributions are very welcome ;-). 

Flow

On how to use it, I’d like to refer you to the video, of course – it will get you started in 20 minutes, and explain the basic steps.  In fact, the past couple of months, I referred a few partners to this video, and they were all able to do their dependency analysis – so I guess it’s descriptive enough, and the tools work (well enough ;-)).  But still, let me give you a short overview of the steps I think you should take – with a few remarks I think are interesting to consider.

Step 0a: Set up waldo.model.tools

You might remember this blogpost: C/AL Source Code Analysis with PowerShell.  Well, it’s that tool we will be using for the next steps. It can analyze C/AL code – so it’s exactly what we need ;-).  And apparently people are able to get it running.  I actually came across this blogpost where Ricardo Paiva Moinhos used this tool to create a generic data migration script from C/AL to AL.  Awesome!

Step 0b: Set up a Business Central environment with the ALDependencyAnalysis app

This is actually as simple as cloning the app from the ALDependencyAnalysis repo and publishing it to the environment where you would like to perform the analysis.  In my case, a simple Docker container on my laptop.  Make sure APIs are available .. because the app will deploy some custom APIs for us to be able to upload data.
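
Publishing the app could look something like this – just a rough sketch with BcContainerHelper, where the container name and the path to the compiled .app file are my own assumptions:

# Hedged sketch: publish the (compiled) ALDependencyAnalysis app to a local Docker container
Publish-BcContainerApp `
    -containerName 'bcsandbox' `
    -appFile 'C:\Source\ALDependencyAnalysis\ALDependencyAnalysis.app' `
    -skipVerification `
    -sync `
    -install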

Once you have installed the app, you’ll have a new role center: the “Dependency Analysis Rolecenter“.

Welcome to your “Dependency Analysis Control Center” ;-).

Step 1: Get all objects from C/AL and automatically tag them where possible

The assumption here is that you have exported all objects from C/AL (ALL of them, including default objects, because most likely you made changes in those as well, and you’ll want the references to where you made those changes).

With the waldo.model.tools, you can analyze the C/AL Code – so we’ll use that.  In the Scripts-folder of the ALDependencyAnalysis-repo, you’ll find the scripts that I used to upload the necessary  stuff to perform the analysis.

So for uploading the objects, you need to run the “prepare” script first to load the objects in PowerShell.  You’ll see that the script loads the model into the $Model variable, which will be used for the magic.  That variable will be quite big in memory ;-).  The prepare script is also going to load all companies from your API – because it needs those in the upcoming scripts.

Next, there is the 1_UploadObjects.ps1 script, which is simply going to loop over all objects in the $Model variable and upload them via the API to your Business Central environment.
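
To give you an idea, the upload part boils down to something like this – a rough sketch, not the actual script, so the API route, the property names and the way I read the objects from $Model are assumptions:

# Hedged sketch of the object upload loop in 1_UploadObjects.ps1
$ApiUrl     = 'http://bcserver:7048/BC/api/waldo/dependencyAnalysis/v1.0'   # hypothetical API route
$Credential = Get-Credential

foreach ($Object in $Model.Objects) {
    $Body = @{
        objectType = "$($Object.ObjectType)"
        objectID   = $Object.ObjectId
        objectName = $Object.ObjectName
    } | ConvertTo-Json

    Invoke-RestMethod -Uri "$ApiUrl/objects" -Method Post -Credential $Credential `
        -ContentType 'application/json' -Body $Body | Out-Null
}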

Module tagging

It is quite important that you tag every object with a module.  In a way, your object needs to have a “reason to exist”.  An “intent”.  A – let’s call it – “module”.  Such a module usually corresponds to a piece of business logic that you added to the solution.  “FA” could be a module name for “Fixed Assets”, for example.  In this example, you see what I mean – all objects get a module (last column): the reason why they were created.

You can imagine that doing this manually is a huge job.  But you probably have some logic that can determine quite a lot of the object modules for you, like the prefix of an object, the number range, or something like that.  So we created a function in “ModelObject.Codeunit.al” that can handle that for you.  Just change it to whatever works best for you!

Step 2: Manually correct/ignore modules

Once you have all objects in the table of your app, it’s time to correct the modules so that every single object in that table is tagged with the right module.  Your procedure might not have been able to decently tag all objects, and further on in the analysis, it’s important to have the right module for every object.

This is also where you would like to ignore the useless modules. Just imagine you already know which parts of your product you will skip .. then it makes no sense to take it further in your analysis.

Step 3: Get Where-Used per object

This is where it gets interesting (or at least in my opinion ;-)).

The idea is that we are going to create dependencies between these modules. Now, do know this:

  • We are able to analyze code, including things like “where used”
  • This information can be looped and saved (like we did with objects)
  • We tagged all objects with a module

So basically, we can find out which modules “use” or are “used by” other modules.  And – in a dependency analysis, that information is gold!

So – you can already imagine, there is another script in the Scripts folder: 2_UploadObjectLinks.ps1.  That script is a little bit more complicated.  It will figure out all links, remove the ones that refer to themselves, build an object collection, loop it, and send it to the assigned API, resulting in yet another few tables that get filled.

The “Object Links” are the “raw data”: the links between objects.  So that’s basically what the PowerShell script was able to find out.  But while uploading this data, the app also fills the “Module Links” table.  And it speaks for itself: that is the really interesting table that you want to analyze..

Step 4: Analyze dependencies per module

Looking at a bunch of data in a table is hard. Since we’re talking about “links”, why not use Graphviz to visualize what we have?  And that’s exactly what we did – we used this tool: http://www.webgraphviz.com/ .  A very simple way to show a (dependency) graph – by generating a bunch of text that can be copied into this online tool.  And that’s exactly what we can do now.  The action “Show Full Graphiz” shows a message.  Just copy this message into the webgraphviz tool, and you’ll have a visual representation of the interdependencies of all modules of your product.  Like we did:
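
Just to give you an idea of what that generated text looks like – purely illustrative, with made-up module names (the real action obviously generates a lot more):

# Illustrative only: the kind of Graphviz text the action produces
$GraphText = @'
digraph G {
  "Sales" -> "Base"
  "Warehouse" -> "Base"
  "Warehouse" -> "Sales"
}
'@
$GraphText | Set-Clipboard   # paste this into webgraphviz.com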

Step 5: Solve circular dependencies

You might ask “what are circular dependencies?”.  Well – easy: just imagine there are a bunch of dependencies, but they form a circle.  Like:

  • A depends on B
  • B depends on C
  • C depends on A

Or even with more modules involved:

Well – if you have a big monolith, with a bunch of modules with a lot of interdependencies, your graph may look like I showed above – and all these red arrows, basically indicate “you have work to do”.

You can solve these interdependencies either by “not implementing modules anymore” (you can simply do that by toggling the Ignore action), or by starting to combine modules if you realize it doesn’t make sense to split certain functionality into separate modules.  In any case: you can’t implement modules that are circularly dependent.

Step 6: Manually create the app layer

Once you have solved all circular dependencies, you might decide to combine multiple modules into a set of apps.

Now, this step is obviously not mandatory – if you want to create a separate app for every module – please do.  Honestly, I wish I could turn back time and had done that.  Or at least gone a bit more extreme … but we didn’t.  We couldn’t imagine at that point having to maintain all those modules (80+) as apps.  So we continued the analysis by simply creating an app layer and assigning modules to apps.  So, simply create a record for each app in the “Apps” table, and assign an app to each module.

Step 7: Analyze dependencies per app

Now, be careful.  Modules can have a decent dependency-flow (you solved it in step 5), but once you start combining again, you might end up with circular dependencies again.  Just look at this:

So again, you have this “Get Full Graph Text” action for Apps, which you can use to analyze.

Step 8: Solve circular dependencies

This is the last step!  Now you need to solve the circular dependencies again!  You can simply do that by combining modules into one app, moving modules from app to app, splitting, or simply removing modules again ;-).  You know what I mean – structure your functionality, and come up with an architecture that is possible as a combination of AL apps.

We ended up with this:

And again – I wish I had gone a little bit more extreme on the “BASE” app – that would have helped us a lot with new apps that could use a part of the BASE app, but not all of it .. .

Anyway – for you to decide.

Conclusion

Look at this blogpost/solution as a way to get a good mental picture of the monolith you might have in C/AL .. .  Or as one way to get a complete picture of it.  And once you have that – it’s going to be so much easier to make decisions regarding dependencies .. or things to ignore .. or .. .

Disclaimer

Do NOT judge my code, please.  It has been developed because we needed a tool, quickly, for one time only – not to be sold, not to be used for anything else.  I just decided to share it because I noticed that many people were interested. 

The tool is provided AS IS.  I’m not going to support it, nor update it.  Any contributions are always welcome, of course ;-).

Enjoy!


Multi Root Workspaces in VSCode for AL Development – Episode 2

Remember my blog post on “Multi-root workspaces in VSCode for AL Development“?  If not – it might be interesting to read that one first, because this is in fact an “extension” (get it?) of it .. so you might say this blogpost “depends” on that one ;-).

I concluded that post with some scripts – and that’s the part I’d like to extend a bit, because I needed more functionality since the last blogpost.  And I’d like to refer to these scripts again – and tell you a bit more about how they could make your life a bit easier as well .. .

Branching

The main part that I extended is the “Git” part.  And as an example, I’d like to point you to this blog post from Michael Megel.  Michael talks about the “Release flow” – and how much it makes sense for ISV partners in Business Central.  Well .. now take into account the multitude of apps that we might have.  In our company, we have 22 apps at this moment (x2 if you count the corresponding test apps as well). 

In terms of Git/DevOps, that’s 22 repositories.  In terms of VSCode, that’s 44 workspaces.  In terms of release management: that’s branching 22 repositories the very same: if I need to create a release branch, I need to create the same release branch over all repositories at the same time.

So indeed – we chose to NOT have a release per app, but rather have one release for all apps – so when we create a release branch, we basically have to create the same branch name in all apps.  Fairly simple – but if you have to do that manually for 22 repos, that’s going to be tedious (and any repetitive job that is executed manually just cries for mistakes). 

This is just one example of many that need to be solved when you have multiple apps/workspaces/.. .

Scripting

Indeed – that’s where scripting comes into play.  And having a multi-root workspace in combination with some PowerShell scripts just makes your life a bit easier.  Do notice though that I’m not making my scripts part of the multi-root workspace.  That just doesn’t work – the AL Language extension does “something” with the F5 key that kind of disables the ability to run any PowerShell script that’s part of the same multi-root environment (F8 does work – but I need F5 ;-)).  So – I always have a second VSCode open with my PS scripts.

The scripts

The scripts are still located in the same place as mentioned in my previous blog, but I have more now.  So you have more now as well ;-).  Do notice they’re based on my PowerShell modules.  So I do advise you to install them, or not everything might work (thank you for the download count ;-)).  So – this is what I have today:

_Settings.ps1

Make sure all variables are correct in this file – otherwise the scripts below won’t work correctly.  As you can see in this script, it will not look at any workspace file, but it finds all app.json files and treats their directories as “targets” for where to execute the scripts below.  I might change that behaviour in the future though – I don’t know yet – this works for me now ;-).
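
In essence, it does something like this – a simplified sketch, not the actual file, and the root folder is obviously yours to change:

# Hedged sketch of how _Settings.ps1 collects its "targets"
$BaseFolder = 'C:\Source\MyProduct'   # hypothetical root that contains all app workspaces

$Workspaces = Get-ChildItem -Path $BaseFolder -Filter 'app.json' -Recurse -File |
    Select-Object -ExpandProperty DirectoryName -Unique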

Apps_CompileAll.ps1

This script will compile all apps in your multi-root workspace, in the right order (it will use the scripts I blogged about here to determine the order).  And then it will call “Compile-ALApp”, which is part of my module “Cloud.Ready.Software.NAV“ and uses the alc.exe in your user profile (basically from the AL Language extension in VSCode) to compile the app.  I don’t use this script too often though – only when I really need to, for example to get all translation files updated.
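
If you wonder where that alc.exe comes from: something like this will find the latest one in your AL Language extension – again, just a sketch with an assumed project path, the real Compile-ALApp does more than that:

# Hedged sketch: locate alc.exe in the AL Language extension and compile one app
$alc = Get-ChildItem -Path "$env:USERPROFILE\.vscode\extensions" -Filter 'alc.exe' -Recurse |
    Sort-Object FullName -Descending | Select-Object -First 1

& $alc.FullName /project:"C:\Source\MyProduct\MyApp" /packagecachepath:"C:\Source\MyProduct\MyApp\.alpackages"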

Apps_LaunchJson_CopyToAll.ps1

Well – if you use one dev environment for all your apps, it’s good to have the same launch.json for all workspaces.  I know there is a way to update your workspace file with configurations – but I personally don’t use that.  At least this way, I’m still able to easily just open an individual workspace, and still use the right launch.json.

Apps_OpenAppJsons.ps1

This is a strange one.  This script is actually going to copy a command to my clipboard that I can execute in the terminal of my multi-root workspace. It will open all app.json files.  A few reasons why that would be interesting:

  • you would like to manually open all manifests to do a similar change to all of them
  • You would like to just open a file of all workspaces to start the AL compile and find all code analysis problems in one go (an app is only compiled when one of its files is opened)

Apps_Symbol_Cleanup.ps1

This script will remove all symbol files from all workspaces.  Especially when you change localization or version, this is really useful.

Apps_Symbol_Download.ps1

Yep – a loop for downloading all symbols for all workspaces.  It isn’t very useful though, because in a fresh environment, it will most likely not be able to download all symbols – although, when I start a very clean environment, I usually clean up all symbols and run this once – then at least I have “most” of them.  It doesn’t hurt to run this in the background ;-).

Apps_ZipTranslation.ps1

Simple loop to put all translation-files in one zip-file.  Easy to mail to your translator.

Git_CreateBranchFromMaster.ps1

This is where the branching comes into play.  This script will first synchronize master, and then start a new branch, all with the same name, for all your workspaces.  Especially interesting to keep branch naming in sync across multiple repositories (as necessary in the Release Flow mentioned earlier).
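
Under the hood, that’s not much more than a loop like this – a sketch, assuming a $Workspaces list like the one in the _Settings sketch above and a made-up branch name:

# Hedged sketch: create the same branch from an up-to-date master in every workspace
$BranchName = 'releases/2020-wave2'   # hypothetical branch name

foreach ($Workspace in $Workspaces) {
    Push-Location $Workspace
    git checkout master
    git pull
    git checkout -b $BranchName
    Pop-Location
}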

Git_DiscardAll.ps1

Just imagine you messed up – you don’t want to keep some kind of edit in all your workspaces, and you want to execute a “Discard All” on all of them.  That’s exactly what this script does.

Git_StageAndCommit.ps1

If you don’t want to discard, but you edited files in all your workspaces and you want the same commit message for all your repositories: this is just a script that loops through your repos and stages and commits with that same message.

Git_SwitchBranch.ps1

This doesn’t need too much explanation: it will switch all workspaces to another branch.

Git_SyncMasterBranch.ps1

Again – exactly like it says – it will update the master-branch for all your workspaces.

Git_UpdateBranchFromMaster.ps1

This one will update a branch from the master branch.  It will first sync the master branch (making sure it’s up to date with the remote), and then update the selected branch with master.  This might result in conflicts, which you then have time to solve in VSCode.

So, in which workflows do you use them?

They are very useful in many scenarios .. .  Let me explain a few of them.. .

Translations

We try to translate in batches (no – developers are not translators.  Development languages are not “normal” languages, you know ;-)).  As such: we send a bunch of translation files to the person who will translate them.  When I get them back, I import the files and commit. The workflow is something like:

  • Create a branch for all workspaces (Script: Git_CreateBranchFromMaster)
    • Usually called “Translation”
  • Compile all to create an updated .g.xlf file (Script: Apps_CompileAll)
  • Use VSCode “XLIFFSync” extension to update all translated files
  • Commit “Before Translation” to git (Script: Git_StageAndCommit)
  • Create a zip-file (Script: Apps_ZipTranslation)
  • Import translated files to the right folders (manually)
  • Commit “After Translation” to git (Script: Git_StageAndCommit)
  • Pullrequest all changes to master (Manually in DevOps – Intentionally – I want PRs to be manual at all times (for now))

Release Flow

As mentioned above – there are many scenarios where you would like to “sync branch names” across multiple repositories, like creating the same release-branch for multiple repositories in an AL Dependency Architecture.

Simple:

  • Create a branch for all workspaces (Script: Git_CreateBranchFromMaster)
  • Create/modify the pipelines (usually yml files across all repos)
  • Commit to git (Script: Git_StageAndCommit)
  • Secure your branch in DevOps (Branch Policies)

Major upgrades

Microsoft comes with a new major release twice a year.  And for major upgrades, it usually takes some time to prepare your stuff for the next upgrade.  We simply follow this workflow:

  • Create a branch for all workspaces (Script: Git_CreateBranchFromMaster)
  • Fix a particular problem
  • Commit to git (Script: Git_StageAndCommit)
  • Pullrequest to master when done (manually in DevOps)

And probably steps 2 and 3 have to be repeated a few times – and that’s again where the scripts become very useful ;-).

Conclusion

If you are a heavy user of multi root workspaces in AL Development – give these scripts a spin.  I encourage you ;-).

Microsoft Dynamics 365 Business Central Virtual Event, June 3rd, 2020

It was quite expected, I guess.  After all the cancellations of Business Central conferences, like NAVTechDays, Directions, Days of Knowledge, .. , Microsoft announced today that they will host a first “Virtual Conference” called “Microsoft Dynamics 365 Business Central Virtual Event”, and it will be held on June 3rd, 2020.

The content will be 16 pre-recorded sessions that will be available (on-demand) for 12 months:

  • What's new: Dynamics 365 Business Central modern clients – part 1
  • What's new: Dynamics 365 Business Central modern clients – part 2
  • What’s new: Visual Studio code and AL language
  • Managing access in Dynamics 365 Business Central online
  • Managing customer environments in Dynamics 365 Business Central online
  • What’s new: Dynamics 365 Business Central application
  • Overview: Dynamics 365 Business Central and Common Data Service integration
  • Interfaces and extensibility: Writing extensible and change-resilient code
  • Dynamics 365 Business Central: How to avoid breaking changes
  • What’s new: Dynamics 365 Business Central Server and Database
  • Dynamics 365 Business Central: Coding for performance
  • Deep dive: Partner telemetry in Azure Application Insights
  • Dynamics 365 Business Central: How to migrate your data from on-premises to online
  • Migrating data from Dynamics GP to Dynamics 365 Business Central online
  • Dynamics 365 Business Central: Your latest demo tools and resources
  • Introducing SmartList Designer for Business Central (this session will be published later – expected in July)

I have no idea what the user experience will be for a conference like this – but let’s find out and register here: https://aka.ms/virtual/businesscentral/2020RW1

And mark your agenda: June 3rd, 2020!

Getting not-out-of-the-box information with the out-of-the-box web client

A few days ago, I saw this tweet:

And that reminded me of a question I got a few weeks ago from my consultants on how to get more object information from the Web Client.  More in detail: in Belgium, we have 2 languages for a tiny country (NLB, FRB), both of which differ from the language used by developers (ENU).  Meaning: consultants speak a different language than the developers, resulting in misunderstandings.

I actually had a very simple solution for them:

The Fields Table

For developers, a well known table with information about fields.  But hey, since we can “run tables” in the web client (and since this is pretty safe to do since these are not editable (and shouldn’t be – but that’s another discussion :D)), it was pretty easy to show the consultants an easy way to run tables.  It’s very well described by Microsoft on Microsoft Docs.  Just add “table=<tableid>” in the URL the right way, and you’re good to go.  So for running the “Fields table”, you could be using this URL: https://businesscentral.dynamics.com/?table=2000000041

And look at that wealth of information:

  • Data types
  • Field names
  • Field captions depending on the language you’re working in
  • Obsolete information
  • Data Classification information
  • ..

All a consultant could dream of to decently describe change requests and point developers to the right data, tables and fields.

This made me wonder though:

Can we easily get even more out of the web client?

Not all of the Business Central users, customers, consultants, … are developers.  So, can we still access this kind of information without access to code, VSCode or anything like that?

Yes we can. 

In fact, the starting point should be: how do I find objects?  Is there a list with objects?  And therefore also a list with these so-called system tables?

Well, you’ll need to …

learn how to find “AllObj”, and you’ll find it all!

AllObj is a system table that houses all objects (including the objects from Extensions), so if you go to this “kind of” url, you’ll find all objects in your system:

https://businesscentral.dynamics.com/?table=2000000038

You’ll see a very simple list of objects, and you can even see the app (package id) it belongs to (not that that is important though …):

So – now you know how to find all objects and how to run objects.  You can run tables, reports, queries and pages, simply by constructing the right URL (pretty much the same as explained here).
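
If you want to make that even easier for your consultants, you could hand them a one-liner like this – the base URL is obviously an assumption, use your own environment:

# Hedged sketch: open a table directly in the web client from PowerShell
$BaseUrl = 'https://businesscentral.dynamics.com'
$TableId = 2000000038   # AllObj
Start-Process "$BaseUrl/?table=$TableId"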

System/Virtual tables

To find these special tables with system information, simply filter the “AllObj” table on “TableData” and scroll down to the system tables number range (ID range of 2,000,000,000 and above) and start browsing :-).  You’ll see that you don’t always have permission to read the content .. but if you do, you’d be surprised by the data that you can get out of the system.

Just a few pointers

  • Session information: https://businesscentral.dynamics.com/?table=2000000009
  • All Objects: https://businesscentral.dynamics.com/?table=2000000038
  • Fields: https://businesscentral.dynamics.com/?table=2000000041
  • License Permission: https://businesscentral.dynamics.com/?table=2000000043
  • Key: https://businesscentral.dynamics.com/?table=2000000063
  • Record link: https://businesscentral.dynamics.com/?table=2000000068
  • API Webhook Subscription: https://businesscentral.dynamics.com/?table=2000000095
  • API Webhook Notification: https://businesscentral.dynamics.com/?table=2000000096
  • Active Session: https://businesscentral.dynamics.com/?table=2000000110
  • Session Event: https://businesscentral.dynamics.com/?table=2000000111
  • Table Metadata: https://businesscentral.dynamics.com/?table=2000000136
  • Codeunit Metadata: https://businesscentral.dynamics.com/?table=2000000137
  • Page Metadata: https://businesscentral.dynamics.com/?table=2000000138
  • Event Subscription: https://businesscentral.dynamics.com/?page=9510

What if I get an error?

Well, that happens – like this one:

I don’t know why it does that – but do know that you can always turn to a developer, who can try to apply the C/AL trick: just create a page in an extension, add all fields from the table, and simply run that page.

Deploying from DevOps the right way: enabling External Deployment in OnPrem Business Central environments

It seems that lately, I’m only able to blog about something when I have a tool to share.. . That needs to change .. :-/. But not today. Today, I’m going to share yet another tool that we at the ALOps-team have been working on to serve many of our customers. And we decided to share it with the community. The tool is free, it will stay free … but, there is a …

Disclaimer

The tool is a hack, nothing more than that. We simulate the behavior of what we think happens on the SaaS environment when you upload an extension through the Automation API. So, the tool is “as is”, there is no official support other than me not wanting you to suffer problems with it ;-). There is a GitHub repo where you can share your feedback.
The tool is dependent on how Business Central will evolve in this matter – and we hope this will “just work” for many updates to come. It will work on a decent install of Business Central – not on any kind of copy/paste or other non-standard installation.

Deploying extensions from DevOps, without a build agent at the customer site

The official statement from Microsoft for deploying apps from DevOps to your OnPrem customers is: install a DevOps build agent. As you might know, build agents sometimes don’t act the way you want – and having to maintain a bunch of infrastructure that is not 100% under your control isn’t something that you want either. Customers might install a Windows update, or .. do whatever makes your release pipeline stop running…

But what if…

.. we could just enable the Automation API (because, as you know, there is an ability to publish extensions with it) for OnPrem customers, and use that in our DevOps for our CD pipelines?
Well .. using the Automation API to publish an extension, is quite the same as using the “Upload Extension” action on the “Extension Management” page in Business Central:

Thing is – that doesn’t work OnPrem. So in a way – the “Upload Extension” functionality in the Automation API doesn’t work OnPrem either. The action simply isn’t available. And if you would run page 2507 (which is the upload wizard page) manually, it would simply show you the following message when you would try to upload an extension:

So – the question is .. how do we enable “External Deployment”?
Well, it’s just a setting on the server instance: you point it to some kind of API endpoint that the NST will call whenever anyone uploads an extension.

ALOps.ExternalDeployer

So, we created a PowerShell module that makes it pretty easy to enable the External Deployer on any OnPrem environment. In fact, with 4 lines of PowerShell, you’ll have it up and running! Run this PowerShell on the machine that is running the NST you would like to deploy to.

1. Install ALOps.ExternalDeployer: this will install the PowerShell module on the machine

install-module ALOps.ExternalDeployer -Force

2. Load the module: this will simply load the necessary commandlets in memory:

import-module ALOps.ExternalDeployer 

3. Install the External Deployer: this will install an agent that will take care of the app-publish and install whenever you upload an app through the Automation API, or the upload page.

Install-ALOpsExternalDeployer 

4. Link the ExternalDeployer to the right NST: it will update and restart the NST with the settings needed for the External Deployer.

New-ALOpsExternalDeployer -ServerInstance BC

Done!

The easiest way to test it is to simply upload an extension through the Upload Extension wizard in Business Central. Thing is, in Business Central OnPrem, the page isn’t accessible, but you can easily run any page by using the parameter “?page=2507” in the web client URL.
So – just run page 2507 to upload an Extension. Now, you’ll get this message:

That’s looking much better, isn’t it?
Next, since the “Deployment Status” isn’t available either from the “Tell Me”, you can also run that page by providing this parameter in the url: “?page=2508“.
Even if the upload failed, you get information on the page, just like you would in Business Central SaaS:

AND you can even drill down:

So .. It works! And this also means it will work through the Automation API. You can find all info on how to do that here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/itpro-introduction-to-automation-apis

And if you would like to do that with ALOps …

Well, pretty easy. There is an ALOps step “ALOps Extension API“, which has all necessary parameters to deploy. Just provide the right parameters, like:

  • API Interaction: Batch Publish (if you’d like to publish more than one extension at the same time)
  • API Endpoint
  • API Authentication

And you’re good to go! Here’s an example of one of our pipelines:

In our company, it’s all we use today. All deployments to all customers are using this external deployer. So rest assured – it’s well tested and very much approved ;-).
Enjoy!

Deploying from DevOps the right way (Part 2): Deploying to OnPrem Business Central environments with the automation API

You might have read my previous blogpost on how to enable the “external deployment” in an OnPrem Business Central environment. Well, that post deserved an “extension” as I didn’t provide examples on how to deploy with PowerShell – which you would be able to do within Azure DevOps.

Scenario

The scenario is still the same: you have all these OnPrem customers that you would like to deploy your apps to. Microsoft is clear: just install a DevOps agent at all those customers. The alternative I try to give is – well, don’t install the DevOps agent, but just make “External Deployment” possible by following the steps in my previous post, and use the Automation API, just like you would for Business Central SaaS. Sidenote: this API needs to be accessible from your company, so we make sure the customer allows our IP in their firewall so we can access the Automation API directly from our location.

PowerShell

Since I don’t use PowerShell in DevOps (surprise, surprise), I created an example script for you in my repo here: https://github.com/waldo1001/Cloud.Ready.Software.PowerShell/blob/master/PSScripts/DevOps/DeployWithAutomationAPI.ps1
Just a few things worth mentioning:

  • It’s good to have the app.json and the app-file in the artifacts, to be able to easily get the details about the app being released
  • The publish is just a matter of streaming the file in the request
  • Notice I’m using the “beta” version of the API. I was able to publish the extension with v1.0, but I wasn’t able to get the deployment status – only through the beta-version. Since this is an unsupported way of deployment, I don’t think I can ask Microsoft to help me on this ;-).
  • You would be able to loop the call about the deployment progress, to see if it was successful or not – basically a loop until the status says “completed” or “failed”.

The main part here is obviously the PATCH method to upload the extension. The external deployer you installed will do the rest.. .
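
To make that a bit more tangible, here is a stripped-down sketch of that PATCH – the endpoint, company selection and file path are assumptions on my part, so do check the full script in the repo:

# Hedged sketch: stream an .app file to the Automation API ("beta") of an OnPrem environment
$BaseUrl    = 'https://thedestinationbc/bc/api/microsoft/automation/beta'
$AppFile    = 'C:\Artifacts\MyApp.app'
$Credential = Get-Credential

# Get a company id first
$Companies = Invoke-RestMethod -Uri "$BaseUrl/companies" -Credential $Credential
$CompanyId = $Companies.value[0].id

# PATCH the app file as a stream - the external deployer takes it from there
Invoke-RestMethod -Uri "$BaseUrl/companies($CompanyId)/extensionUpload(0)/content" `
    -Method Patch -Credential $Credential `
    -ContentType 'application/octet-stream' `
    -Headers @{ 'If-Match' = '*' } `
    -InFile $AppFile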

ALOps

As said, I don’t use PowerShell anymore, because I’m using ALOps, just because it is so much more hassle-free .. and we see that many people are starting to use ALOps as well, also for community purposes. Nice! This means community projects are also getting decent build pipelines instead of none – and it’s free, so why not ;-).
In ALOps, we created the “ALOps Extension API” step, which you can use to publish an extension through the Automation API OnPrem. The easiest way to do that is by simply introducing one step and setting the “Batch Publish” interaction. Basically it will get all app files you selected as artifacts, figure out the publishing order for you, and install all artifacts that you have set up in your release step. Easy peasy. It doesn’t care if it’s in Docker or not .. if the endpoint is available and the external deployer is installed, your publish will work. Here is the setup in the classic editor which releases 22 apps – one simple step, with only 1 real parameter to fill:

Or in yaml:

steps:
 - task: hodor.hodor-alops.ALOpsExtensionAPI.ALOpsExtensionAPI@1
   displayName: 'Batch Publish'
   inputs:
     interaction: batch
     api_endpoint: 'https://thedestinationbc/bc/api'
     authentication: windows 

Microsoft Dynamics 365: 2020 release wave 2 plan

That’s right. It’s time again for the next round of features that Microsoft is planning for the next major release. It’s weird this time, lacking most info from conferences .. the kind of “silent” release of Wave 1 .. it’s almost like flying blind. Although, there is a crapload of information online. And of course, don’t forget Microsoft’s Virtual Conference from June 3rd. 

Since I’m still focusing on Business Central – I’m only going to cover that part .. but do know that the entire “Dynamics 365” stack has a release for Wave 2.  Business Central-related information can be found here: https://docs.microsoft.com/en-us/dynamics365-release-plan/2020wave2/smb/dynamics365-business-central/planned-features

As it doesn’t make sense to just name all features (as they are all listed on the link above), I’m just going to talk again about the features I’m looking forward to (and why) – and the ones that I’m kind of less looking forward to.

What am I looking forward to?

As always – most probably this is going to be somewhat tech-focused .. sorry .. I am what I am, I guess.

Service-to-service authentication for Automation APIs

Very much looking forward to that – just because of the possibilities that we’ll have with DevOps, because at this point, supporting a decent release flow in DevOps to an environment that is fully “Multi Factor Authentication” – well – that’s a challenge. For me, this has a very high priority.

Support for an unlimited number of production and sandbox environments

Today, a business can only be in three countries, because we can only create 3 production environments. That obviously doesn’t make sense – so it’s absolutely a good thing that Microsoft is opening this up! Next to that…

Business Central Company Hub extension

That sounds just perfect! It seems they are really taking into account that switching companies is not a “per tenant” kind of thing, but really should be seen across multiple tenants.

It seems it’s going to be built into the application, within a role center or a task page.  At some point, Arend-Jan came up with the idea to put it in the title bar above Business Central, like this:


Really neat idea that I support 100% :-). As long as it would be across multiple tenants/localizations .. :-). Maybe as an extension on the Company Hub? Who knows.. . Whatever the solution, I’m looking forward to it!

I couldn’t find the extension in the insider-builds – so nothing to show yet.. .

Business Central in Microsoft Teams

Now, doesn’t THAT sound cool? Because of the COVID-19 happenings, our company – like many other companies out there – has been using Teams a lot more than we were used to. And the more I set up Teams, the more I see that little integrations with Business Central could be really useful!

What exactly they are envisioning here, I don’t know, but the ability to enter timesheets, look up contact information to start a chat or call or invite or… . Yeah – there are a lot of integration-scenarios that would be really interesting.. .

Common Data Service virtual entities

I’m not that much into the Power-stuff (fluff?) just yet, but I can imagine that if I would be able to expose my own customizations, or any not out-of-the-box entities to CDS, that it would be possible to implement a lot more with Power Apps and other services that connect to the CDS entities.

Performance Regression and Application Benchmark tools

One of the things we are pursuing is the ability for DevOps to “notice” that things are getting slower. This means that we should be able to “benchmark” our solution somehow. So I’m looking forward diving into these tools to see if they can help us achieve that goal!

Pages with FactBoxes are more responsive
Role Centers open faster

These are a few changes in terms of client performance – and what’s not to like about that ;-). I have been clicking through the client, and it definitely isn’t slower ;-). I also read somewhere that caching of the design of the pages is done much smarter .. even across sessions, but I didn’t seem to find anything that relates to that statement here on the list.

On-demand joining of companion tables

So so important.  Do you remember James Crowter’s post on Table Extensions?  Well, one of the problems is that it’s always joining these companion tables.  I truly believe this can have a major impact on performance if done well.   

Restoring environments to a point in time in the past

I have been advocating strongly against “debug in live” – well, this is one step closer to debugging with live data, but not in the production environment. Also this is a major step forward for anyone supporting Business Central SaaS!

Attach to user session when debugging in sandbox

Sandboxes are sometimes used as User Acceptance Test environments. In that case, multiple users are testing not-yet-released software, and finally, we will be able to debug their sessions to see what they are hitting.

Debug extension installation and upgrade code

Finally! I have been doing a major redesign of our product, and would have really enjoyed this ability ;-). Nevertheless, I’m very glad it’s finally coming! No idea how it will work, but probably very easy ;-).

What am I not looking forward to?

Well, this section is not really the things I don’t like, but rather the things I wasn’t really looking forward to as a partner/customer/.. . I don’t know if it makes any sense to make that into a separate section .. but then again .. why not. It actually all started with something that I really really hated in one of the previous releases: the ability to go hybrid / customize the Base App. And I kept the section ever since ;-). So .. this is the rest of the list of features we can expect:

  • Administration
  • Application
  • Migrations to Business Central Online
  • Modern Clients
  • Seamless Service
  • General

I have the feeling not everything is included in this list, honestly. There isn’t much mentioned on the VSCode level, while we know there is going to be quite some work in the “WITH” area .. . And we expect to have “pragmas” in code available in the next release as well – or so I understood. That’s just a couple of things you could see in the “Interfaces and extensibility: Writing extensible and change-resilient code” session of Microsoft’s recent Virtual Conference. 

Installing a DevOps Agent (with Docker) with the most chance of success

You might have read my previous blog on DevOps build agents. Since then, I’ve been quite busy with DevOps – and especially with ALOps. And I had to conclude that one big bottleneck keeps being the same: a decent (stable) installation of a DevOps Build server that supports Docker with the images from Microsoft. Or in many cases: a decent build agent that supports Docker – not even having to do anything with the images from Microsoft.
You have probably read about Microsoft’s new approach on providing images: Microsoft is not going to provide any images any more, but will help you create your own images – all with navcontainerhelper. The underlying reason is actually exactly the same: something needed to change to make “working with BC on Docker” more stable.

Back to Build Agents

In many support cases, I had to refer back to the one solution: “run your Docker images with Hyper-V isolation”. While that solved the majority of the problems (anything regarding alc.exe (compile) and finsql.exe (import objects)) .. in some cases, it wasn’t solving anything, which leaves only one conclusion: it’s your infrastructure: the version of Windows and/or how you installed everything.

So .. that made me conclude that it might be interesting to share with you a workflow that – from some perspective – doesn’t make any sense, but does solve the majority of the unexplainable problems with using Docker on a build server for AL development :-).

Step 1 – Install Windows Server 2019

We have the best results with Windows Server 2019, as it’s more stable and able to use the smaller Docker images.

Step 2 – Full windows updates

Very important: don’t combine the Docker installation with Windows updates and such. First, install ALL Windows updates and then reboot the server. Don’t forget to reboot the server after installing ONLY the Windows updates.

Step 3 – Install the necessary windows features

So, all Windows updates have been applied and you have restarted – time to add the components that are necessary for Docker. With this PowerShell script, you can do just that:

Install-WindowsFeature Hyper-V, Containers -Restart

You see – again, you need to restart after you did this! Very important!

Step 4 – Install Docker

You can also install Docker with a script:

Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Confirm:$false -Force
Install-Module DockerProvider -Confirm:$false -Force
Install-Package Docker -RequiredVersion 19.03.2 -ProviderName DockerProvider -Confirm:$false -Force
  

You see, we refer to a specific version of Docker. We noticed not all versions of Docker are stable – this one is, and we always try to test a certain version (with the option to roll back), instead of just applying all new updates automatically. For a build agent, we just need a working Docker, not an up-to-date Docker ;-).

Step 5 – The funky part: remove the “Containers” feature

What? Are you serious? Well .. Yes. Now, remove the Containers feature with this script and – very important – restart the server again!

Uninstall-WindowsFeature Containers

Restart-Computer -Force:$true -Confirm:$false

Step 6 – Re-install the “Containers” feature

With a very similar script:

Install-WindowsFeature Containers 
Restart-Computer -Force:$true -Confirm:$false

I can’t explain why these last two steps are necessary – but it seems the installation of Docker messes up something in the Containers feature that – in some cases – needs to be restored.. . Again, don’t forget to restart your server!

Step 7 – Disable Windows Updates

As Windows updates can terribly mess up the stability of your build agent, I always advise disabling them. When we want to apply Windows updates, we just execute the entire process described above again! Yes indeed .. again!
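
How you disable them is up to you – one quick way (an assumption on my part, not a strict requirement) is to simply disable the Windows Update service:

# Hedged sketch: stop and disable the Windows Update service on the build agent
Stop-Service -Name wuauserv -Force
Set-Service -Name wuauserv -StartupType Disabled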

That’s it!

You might ask yourself – is all this still necessary now that we have moved to the new way of working with Docker, where we build our own images and such?  Well – I don’t know, but one thing I do know: the problems we had to solve were not all related to the Business Central images – some were just about “Docker” and the way Docker was talking to Windows .. (or so we assumed). So I guess it can’t hurt to try to find a way to set up your build servers in a way that you know is just going to work right away.. . And that’s all I tried to do here ;-).



New Upcoming Conference: DynamicsCon

I just wanted to raise some attention to a new Conference in town: DynamicsCon.

Quite interesting, because it’s perfectly aligned with the current world situation regarding COVID-19 issues: it’s a virtual event .. and it’s free! I’m not saying I prefer virtual events. I don’t. But given the circumstances, I guess it makes sense – and it has some advantages as well: you will be able to see all content, all sessions are pre-recorded (which means: demos will work ;-)), and you can follow it from your living room without losing any time on traveling.

Now, the committee is handling this really well: they have been calling for speakers for a while, and many people reacted. Really, anyone could submit session topics to present. As I did as well (you might have figured out already that I like to do this kind of stuff). So how do they pick the topics/speakers? Well, anyone who registers can vote for sessions!

So please, if you didn’t register yet: do so now, and until August 1st (that’s not far out!), you can help the committee pick the topics most people want to see during the conference. The sessions with the most votes will be picked! I’m not going to advertise my sessions – just pick based on topics. That makes most sense!

Some highlights on the conference:

  • It’s free
  • It’s virtual
  • It’s not just for Business Central. These are the tracks:
    • 365 Power Platform
    • 365 Finance & Operations
    • 365 Customer Engagement
    • 365 Business Central
  • There will be Q&A panels during the conference
  • Recorded sessions which will end up on YouTube!

Date: September 9-10

Using DevOps Agent for prepping your Docker Images

I have yet another option that might be interesting for you to handle the artifacts that Microsoft (Freddy) is providing instead of actual Docker images on a Docker registry.

What changed?

Well, this shouldn’t be new to you anymore. You must have read the numerous blogposts from Freddy announcing a new way of working with Docker. No? You can find everything on his blog.
Let me try to summarize all of this in a few sentences:
Microsoft can’t just keep providing the Docker images like they have been doing. With all the versions, localizations .. and mainly the countless different hosts (Windows Server, Win 10, Windows updates – in any combination) .. Microsoft simply wasn’t able to keep up a stable and continuous way of providing all the images to us.
So – things needed to change: instead of providing images, Microsoft is now providing “artifacts” (let’s call them “BC installation files”) that we can download and use to build our own Docker images. So .. long story short .. we need to build our own images.
Now, Freddy wouldn’t be Freddy if he didn’t make it as easy as at all possible for us. We’re all familiar with NAVContainerHelper – well, the same library has now been renamed to “BcContainerHelper”, and it contains the toolset we need to build our images.

What does this mean for DevOps?

Well – lots of your Docker-related pipelines probably download an image and use that image to build a container. In this case, you won’t download an image, but simply check whether it already exists. If not, you build an image, and afterwards build a container from it, which you can use for your build pipeline in DevOps.
Now, while BcContainerHelper has a built-in caching mechanism in the “New-BcContainer” cmdlet .. I was trying to find a way to have stable build timings .. together with “not having to build an image during a build of AL code”. And there is a simple solution for that…
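
Once the images exist (see the pipeline below), a build pipeline can simply create its container from such a pre-built image – something like this, where the container name, image name and credentials are just examples:

# Hedged sketch: create a build container from a pre-built image instead of from an artifact URL
$Credential = New-Object pscredential 'admin', (ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force)

New-BcContainer -accept_eula `
    -containerName 'bcbuild' `
    -imageName 'bccurrent:be-latest' `
    -credential $Credential `
    -auth UserPassword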

Schedule a build pipeline to build your Docker Images at night

Simple, isn’t it :-). There are only a few steps to take into account:

  1. Build a yaml that will build all images you need
  2. Create a new Build pipeline based on that yaml
  3. Schedule it every night (or every week – whatever works for you)

As an example, I built this in a public DevOps project where I have a few DevOps examples. Here are some links:

The yaml

Obviously, the yaml is the main component here. And you’ll see that I made it as readable as possible:

name: Build Docker Images

pool: WaldoHetzner

variables:
  - group: Secrets
  - name: DockerImageName.current
    value: bccurrent
  - name: DockerImageName.insider
    value: bcinsider
  - name: DockerImageSpecificVersion
    value: '16.4'
  - name: DockerArtifactRetentionDays
    value: 7

steps:
# Update BcContainerHelper
- task: PowerShell@2
  displayName: Install/Update BcContainerHelper
  inputs:
    targetType: 'inline'
    script: |
      [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor [System.Net.SecurityProtocolType]::Tls12
      install-module BcContainerHelper -verbose -force
      Import-Module bccontainerhelper

- task: PowerShell@2
  displayName: Flush Artifact Cache
  inputs:
    targetType: 'inline'
    script: |
      Flush-ContainerHelperCache -cache bcartifacts -keepDays $(DockerArtifactRetentionDays)

# W1
- task: PowerShell@2
  displayName: Creating W1 Image
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.current):w1-latest

# Belgium specific version
- task: PowerShell@2
  displayName: Creating BE Image
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be -version $(DockerImageSpecificVersion)
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.current):be-$(DockerImageSpecificVersion)

# Belgium latest
- task: PowerShell@2
  displayName: Creating BE Image
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.current):be-latest

# Belgium - Insider Next Minor
- task: PowerShell@2
  displayName: Creating BE Image (insider)
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be -select SecondToLastMajor -storageAccount bcinsider -sasToken "$(bc.insider.sasToken)"
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.insider):be-nextminor

# Belgium - Insider Next Major
- task: PowerShell@2
  displayName: Creating BE Image (insider)
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-bcartifacturl -country be -select Latest -storageAccount bcinsider -sasToken "$(bc.insider.sasToken)"
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.insider):be-nextmajor

# Images
- task: PowerShell@2
  displayName: Docker Images Info
  inputs:
    targetType: 'inline'
    script: |
      docker images 

Some words of explanation:

  • Pool: this defines the pool where I will execute it. I know this will be executed on one DevOps build agent. This is important: as such, if you have multiple agents in a pool, you actually need to make sure this yaml is executed on all agents (because you might need the Docker images on all agents). Yet, this indicates that when you work with multiple build agents, this might not be the best approach.. .
  • Variables: I used a variable group here to share the sasToken as a secret variable over (possibly) multiple pipelines. The rest of the variables are quite straightforward: I’m not using the “automatic” naming convention from BcContainerHelper, but my own. No real reason for doing that – it just makes a bit more sense to me ;-).
  • Steps: I’ll first install (or upgrade) BcContainerHelper on my DevOps agent and flush the artifact cache if it’s too old (7 days retention). Next, I’m simply using BcContainerHelper to create all images that I will need for my configured pipelines. You see that I have an example for:
    • A specific version
    • A latest current release version
    • A next minor (the next CU update)
    • A next major (the next major version of BC)

Schedule the pipeline to run regularly

Creating a pipeline based on a yaml is easy – but scheduling is quite tricky. Now, there might be a better way to do it – but this is how I have been doing it for years now:

1 – When you edit the pipeline, you can click the three dots in the top right corner, and click “Triggers”

2 – In the Triggers-tab, override and disable CI (you don’t want to run this pipeline every time a commit is pushed)

3 – Then, set up a schedule that suits you to run this pipeline.
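
Alternatively – and this might well be the “better way” I mentioned – you can define the schedule in the yaml itself, so it lives in source control. A minimal sketch (the cron expression and branch name are just examples) that you would add to the build yaml above:

schedules:
- cron: '0 2 * * *'            # every night at 02:00 (UTC)
  displayName: Nightly Docker image build
  branches:
    include:
    - master
  always: true                 # also run when there are no new commits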

And that’s it!

ALOps

If you’re an ALOps user, this would be a way to use the artifacts today. Simply build your images, and use them with the ALOps steps as you’re used to.

We ARE trying to up the game a bit, and also make it possible to do this inline in the build (in the most convenient way imaginable), because we see it’s necessary for people that use a variety of build agents, which simply can’t all be scheduled (as they are part of a pool). More about that soon!

Using Microsoft Dynamics 365 Business Central Artifacts to get to the source code of the default apps


A question I get a lot – especially from people that come from C/AL, and are only taking their first steps into AL – is: how do I get to Microsoft’s source code of the Base App (and the other apps)?
Well, there are multiple ways, really. You can download the symbols and unpack them. You can download the DVD and get to the code on the DVD, or…

You can simply download the artifacts

And with “the artifacts”, I mean the artifacts that are used to build your Docker images.
If you’re already building your Docker containers based on the artifacts, you probably already have them on your system! If not, you can still make them available, even without having to use Docker! Let’s see how that goes…
You might have heard about the “Get-BCArtifactUrl” CmdLet that I pull-requested to BcContainerHelper. What I tried to achieve is an easier way to get to any version of Business Central: by listing all possibilities, and by giving a somewhat easier way to filter them. After many improvements from Freddy, you now have a way to easily get to the url of any BC artifact that Microsoft makes available.
The module also contains a way to download the artifacts with the “Download-Artifacts” CmdLet. So – you can easily get to the url, and you have a cmdlet to download – let’s do that! (if you haven’t got BcContainerHelper, get it first!):

Download-Artifacts -artifactUrl (Get-BCArtifactUrl) -includePlatform  

It will download the artifacts to the folder “C:\bcartifacts.cache” by default (unless you set up another path). In that folder, you’ll find all AL sources. A few examples:

The AL Base App: C:\bcartifacts.cache\sandbox\*version*\platform\Applications\BaseApp\Source

The Test-Apps: C:\bcartifacts.cache\sandbox\*version*\platform\Applications\BaseApp\Test

I always work with the Docker containers, so I automatically have the sources of the exact versions of BC on my own machine whenever I need them. But if you’re not working with containers, or you work with a centralized Docker system (so you don’t have anything locally) .. now you know an alternative way to get to the sources ;-).

Use Azure KeyVault in AzureDevops for sharing licensing and other secrets


You are probably aware of how “secrets” work in Azure DevOps. In a way, it’s simple: you can create variables, and store the value of a variable as a secret or not, simply by tapping the “lock” when creating the variable.

To share variables over multiple repos, you can create a variable group, and use that variable group in multiple pipelines.

Quite Easy! But …

Thing is – out-of-the-box variable definition in DevOps – as far as I know – is “just” on project level. We can define variables on a pipeline, we can pass them to templates, we can create “global” variables and such … but sometimes you need to be able to share a (secret) value, like a license key, over just about all your projects. Or even across multiple DevOps organizations – however you chose to set it up.
Many partners have one DEV license key that expires every 90 days, so you might want to be able to share that license key over all your projects. The goal is: when you have a new key, there is just one place to change it, and all your pipelines will keep running.

How do I share Secret variables over multiple projects?

Let me share a simple way to do that, but first a disclaimer: it could very well be that I’m not aware of a built-in DevOps option to do this. Please let me know in the comments if that’s the case.

Step 1: Set up an Azure Key Vault in the Azure Portal

In Azure (yes, you’ll need access to the Azure Portal), you have “Azure Key Vault”.

Just create a new Key Vault:

Step 2: Create Secrets

Once you created your vault, you can simply navigate to it..

And start to create secrets:

As you can see, it’s simple: just a key/value pair basically:

The result is simply a list of secrets that you have now at your disposal.

To continue, let’s go back to DevOps…

Step 3: Create a variable group

As you might already know, variable groups can be linked to secrets in an Azure Key Vault. Since these are all secrets that we want to manage on a “high level”, it makes sense to take the highest level we can to manage variables in DevOps, and that’s: Variable Groups.

Step 4: Link it with Azure Keyvault

Make sure you link it with your Azure Key Vault (and authorize the subscription, and the vault if necessary).

Don’t forget to add all secrets you want to make available in this project. By default, none of the secrets will be linked – you need to “Add” them yourself!

Save, and done! Now, you will be able to …

Step 5: Use it in your pipelines

Here is an example of how to link it in your pipelines, and use it:
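
A minimal sketch, assuming a variable group called “Secrets” that contains a secret named “bc.licensefile” (holding the url of a license file):

variables:
- group: Secrets

steps:
- task: PowerShell@2
  displayName: Use a Key Vault secret
  inputs:
    targetType: 'inline'
    script: |
      # $(bc.licensefile) is substituted by DevOps before the script runs
      Invoke-WebRequest -Uri '$(bc.licensefile)' -OutFile 'C:\Temp\license.flf'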

Do know that, when running the pipeline, you might have to give access to this service connection. Simply permit it and run it – you only need to do this once.

If you ever want to delete/disable access to this subscription, do know it has basically created a service connection, which you can find in the project settings:

Just after I wrote this post, I happened to find this one: https://zimmergren.net/using-azure-key-vault-secrets-from-azure-devops-pipeline/ . Definitely worth a read, as it drills a bit more into the security considerations.

You wonder how? The answer is “DevOps”!


You must have seen this blog post from Microsoft: Maintain AppSource apps and per-tenant extensions in Dynamics 365 Business Central (online)

And/or this blog of AJ: Business Central App maintenance policy tightened

And/or this post from James: SaaS enables progress, why block it?

If you didn’t, please do. Because as a Microsoft Dynamics 365 Business Central partner, your job does not end with “being able to implement some customizations at a customer”. No. When you create apps, these apps will live at customers that most probably will get continuous updates. And to quote Microsoft:

“It is your responsibility for continuously aligning the added code to Microsoft’s release rhythm that is expected for all Business Central online customers”.

Kurt Juvyns – Microsoft

Make no mistake! Don’t believe the fairy tales of some people that Microsoft will/should make sure your code will work forever. No, it’s your code, your responsibility. Just like phone manufacturers change OSes and screen sizes, the apps on them need to either follow or be abandoned.

Microsoft refers to a page on docs: Maintain AppSource Apps and Per-Tenant Extensions in Business Central Online, with resources like release plans, access to pre-release, deprecation information, training and coaching information, … . And – a very clear warning:

If publishers lack to keep their code updatable, they risk that ultimately their apps or PTEs will be removed from the customers tenant, and this will most likely result in important data not being captured as it should. For apps, this also means removal from the marketplace.

Microsoft (Docs)

The article also explains that they will do what they can to inform the right parties when a problem is to be expected. I don’t want to repeat here what they do and how frequently .. just check it out on Microsoft Docs – because that will be maintained.

Reading through the article, I was like .. uhm … hello .. didn’t you forget a chapter? Shouldn’t you include a “how to do this?” or “Best Practices” kind of chapter? I was actually quite disappointed it didn’t mention one single word about the “how”. Well .. if it had .. I would have at least found the word …

DevOps

Let it be no surprise that whatever Microsoft just “announced” isn’t really a surprise. It’s rather a “confirmation” than an “announcement”. But putting something in a contract is one thing. Dealing with it on an operational level is another! And what I (and not only me ..) have been screaming from the rooftops for the last so-many months – no – years, is exactly that: “DevOps is going to be key”!
Honestly, for three years now, I have not seen any way around DevOps. Questions like:

  • How will we work in teams?
  • How can we contribute to the same codebase .. and keep that codebase stable?
  • How will I be notified when my code won’t work against the next version – minor or major?
  • How will I deploy all these dependent apps?
  • How will I keep track of dependencies?
  • How will I maintain “breaking changes”?
  • How can I prepare myself for the next version, so that when it’s released, I have my own app ready that same day?

All these, and many more challenges, have one simple answer: Microsoft Azure DevOps. And it’s time every single Microsoft Dynamics 365 Business Central partner not only asks itself those questions – but also starts taking action in answering them for their company and dealing with it. I can see only one reason why Microsoft is writing the article above .. and that is because they notice that (some) partners do NOT take up that responsibility.

I’m serious. We, as a community, make the name of Business Central. If we f..mess up, we mess up the “Business Central” name. It’s as simple as that. Customers will not say “that app sucks” or “the partner sucks” .. customers will say “Business Central sucks”. And it doesn’t. Business Central rules! Or as Steve Endow would say: Business Central Is Amazing! It makes all the sense in the world that we do all we can to be as good as we can.

Starting with DevOps

The general Business Central partner might not be familiar with DevOps – we didn’t really use it with C/SIDE, did we? It’s going to take an effort. Sure. So let me give you a few resources, besides the many coaching-possibilities that Microsoft has in their “Ready To Go” program.
I really liked the book DevOps for Dummies from Emily Freeman. And today, I just learned that BC MVP Stefano Demiliani also wrote a book on DevOps. I have no idea if it’s good – but I can’t imagine it isn’t ;-). I’m buying it for sure!
If you look more at AL, there are people in the community that can definitely help you. I know we have Soren, Kamil, Gunnar and Luc that have been advocating “DevOps” and “Build Pipelines” for a long time. Just watch their NAV TechDays videos on YouTube. You have blogs from Michael Megel, and Tobias Fenster is also diving into making DevOps much more approachable for all partners!
Then you have me :-). I have been advocating DevOps so much the past couple of years. So. Much. And I’m still doing that with an occasional virtual training (thanks to COVID-19) and sessions at (virtual) conferences. A few years ago, I had a session at Directions US, and I got many requests like: hey, can you please make your software available to us .. which even resulted in tooling that you can use now, right within DevOps. But that’s not the only tooling you can use – Freddy also maintains the BcContainerHelper, which can be used to create and maintain your apps in DevOps as well! Just follow his blog here.

Conclusion

So, about this article from Microsoft: if you have any questions about the “how” – just answer them with “DevOps” ;-). There is absolutely not a single reason for any partner that creates any kind of apps for Microsoft Dynamics 365 Business Central not to make its life as easy as at all possible. And a good start there is “Microsoft Azure DevOps”. But that’s just my opinion ;-).

Microsoft Dynamics 365 Business Central 2020 release wave 2 is Released!


Just a small reminder for you that yet another major release has been thrown our direction: v17 aka “Business Central 2020 Release Wave 2“. Old news, I know. But I blame the pandemic ;-).

I already blogged about this upcoming release, and basically the list is quite the same.
In fact, besides the official Microsoft Docs documentation, Natalie Karolak shared a smart way to filter for all new features that came out with this Wave 2 release on Microsoft Docs:
https://docs.microsoft.com/en-us/search/?terms=%22APPLIES%20TO%3A%20Business%20Central%202020%20release%20wave%202%20and%20later%22&scope=BusinessCentral&category=Documentation

And there are a few features that I really would like to emphasize, because I think they didn’t get too much attention before – and some of them came as a surprise to me:

New TableType property
Now you can actually make sure certain tables are only ever used as temporary tables. Cool!
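
A minimal sketch of what that looks like – the table name and ID are made up:

table 50100 "Calculation Buffer"
{
    TableType = Temporary; // the platform guarantees this table is never persisted
    DataClassification = SystemMetadata;

    fields
    {
        field(1; "Entry No."; Integer) { }
        field(2; Amount; Decimal) { }
    }
}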

Creating Custom Telemetry Traces for Application Insights Monitoring
This is one that we will use a lot, I think. I’m going to look into whether it’s interesting to add this to my snippets, to have logging for each method “by default”.
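
Something along these lines – a sketch, where the event id and the custom dimension are just examples:

codeunit 50101 "My Telemetry Sample"
{
    procedure PostDocument()
    begin
        // Emits a custom trace to the Application Insights resource configured for the extension
        Session.LogMessage('MYAPP-0001', 'Posting started', Verbosity::Normal,
            DataClassification::SystemMetadata, TelemetryScope::ExtensionPublisher,
            'documentType', 'Invoice');
        // ... actual posting logic ...
    end;
}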

Using Partial Records
A way to get rid of always reading all fields – which will also have a positive impact on table extensions – remember James Crowter’s blog? Now it’s a matter of getting these best practices into the common development principles of all developers.
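
For example – a minimal sketch:

codeunit 50102 "Partial Records Sample"
{
    procedure GetCustomerName(CustomerNo: Code[20]): Text
    var
        Customer: Record Customer;
    begin
        Customer.SetLoadFields(Name); // only the Name field (plus the primary key) is read from SQL
        if Customer.Get(CustomerNo) then
            exit(Customer.Name);
    end;
}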

Using Key Vault Secrets in Business Central Extensions
Interesting! Especially when you have secrets that you need to manage over multiple extensions – you can now set it up in the app.json, let all tenants, customers, apps, whatever, point to one or more keyvaults, and manage the secrets centrally. Super!

New system fields: Data audit fields
Finally we have “Created At/By” and “Modified At/By” fields out-of-the-box for every table/record – managed by the system. This appeared to be already in my list in my previous blog – but for some reason, I didn’t flag it as interesting back then. It sure is, in fact!

You might wonder, what is Microsoft doing to announce and present this new release? Well, like in Spring, we’ll have yet another…

Virtual Conference

Makes sense, obviously! But somehow, I couldn’t really find a lot of attention for it. I asked around in my company, and nobody really knew it was happening. And not only that – it’s happening soon! October 21st! So be fast, and make sure to register (for free) here: http://aka.ms/MSDyn365BCLaunchEvent
And this is the agenda:


Erik Hougaard

I’d like to conclude with the video of Erik Hougaard about the “what’s new with AL in v17”. An interesting approach to find out about new features ;-).

If you’re not subscribed to his channel yet – well – it’s about time ;-).

As said, this was going to be a small announcement – enjoy this new release! I’m already enjoying it, by fixing all the new CodeCop rules in the compiler (and that’s not just the “with”-stuff) .. resulting in about +4500 changed files .. what can I say .. .


Upgrade to Business Central V17 (part 1) – The Workflow


Recently, we have been going through upgrading our 65 apps to the newest release (v17). You might wonder: upgrade? Wasn’t this supposed to be seamless?

Well, let me explain what we did, how we handle stuff internally – and then maybe it does make sense to you why we take the steps we took for upgrades like this.

DevOps

It seems I can’t stop talking about DevOps ;-). Well, it’s the one thing that “keeps us safe” regarding:

  • Code quality
  • Succeeding tests
  • Breaking changes
  • Conflicting number ranges
  • … (and SO MUCH MORE)

You have to understand: in a big development team, you can’t just put everything on hold, and you can’t just let everyone contribute to the same issue (in this case: preparing your code for v17). You will probably continue development while somebody is prepping the code for the next version.

That’s where branching comes in (we just created a “v17prep” branch for every repo). Sure, we continued development, but once in a while, we basically just merged the master-version into the v17prep branch.

Now, with our DevOps pipelines, we want to preserve code quality, and some of the tools we use for that are the code analyzers provided by Microsoft. Basically: we don’t accept codecop warnings. So we try to keep the code as clean as possible, partly by complying with (most of) the CodeCops that Microsoft comes up with. The pipeline basically fails from the moment there is a warning.

I absolutely love this. It is a way to keep the code cleaner than we were able to with C/SIDE. And the developer is perfectly able to act on these failures, quickly and easily, because they get feedback from the pipelines.

But – it comes with challenges as well:

  • CodeCops are added by Microsoft with new compilers in VSCode. They are automatically installed in VSCode, so it could very well happen that on a given morning, lots of new failures pop up in your development environment.
  • CodeCops are added in new versions of BC – so pipelines begin to fail from the moment they run against a higher version. Since we are upgrading … you feel what will happen, right? ;-)

Next, obviously, we have “automated testing” also deeply rooted in our DevOps: not a single pull request can be merged into the stable branch if not all tests have run successfully. When implementing upgrades, I can promise you, there will be failing tests (tests will simply fail for numerous reasons – Microsoft changed behaviour, your test fails) – and if not, well, maybe you didn’t have enough tests yet ;-): the more tests you have, the more likely one will fail because of an upgrade.
And that’s totally OK! Really. This is one of the reasons why we have tests, right? To know whether an upgrade was successful? So, absolutely something the pipeline will help us with during an upgrade process!

Yet another check, and definitely not less important: the “breaking change” check. The check that prevents us from accepting any code that is breaking against a previous version of our app. It’s easy:

  • We download the previous version of the app from the previous successful (and releasable) CI Pipeline.
  • We install it in our pipeline on a docker container
  • Then we compile and install the new version of our app
  • If this works: all is well. If not, it’s probably because of a breaking change which we need to fix. (Tip: using the “force” is NOT a fix .. it causes deployment problems that you’d rather manage manually. Trust me: don’t build a default “force deploy” into a Release Pipeline, or you’ll end up with unmanaged data loss sooner or later.)

That’s the breaking-change check – but do know that in that same pipeline, we also run tests. And in order to do that, we need a container that has my app and my test app in it. And in order to do THAT, we need all dependent apps in there as well. So, we always:

  • Download all dependent apps from other pipelines – again the previously successful CI Pipeline of the corresponding app.
  • Then install all of them so our new app can be installed having all dependencies in place
  • If this doesn’t work: that’s probably a breaking dependency, which we’ll have to fix in the dependent app.

A breaking dependency is rather cumbersome to fix:

  • First create a new pullrequest that fixes the dependent app
  • Wait for it to run through a CI pipeline so you have a new build that you can use in all apps that have this one as a dependency
  • The app with the dependency can pick it up in its pipeline

So in other words: it’s a matter of running the pipelines in a specific order before all apps are back on track. It’s a lot of manually starting pipelines, waiting, fixing, redoing, …

I’m not saying there are no other ways to do this, like putting everything in one repository, one pipeline, .. (which also has its caveats), but having every app in its own repository really works well for us:

  • It lets us handle all apps as individual apps
  • It prevents us from making unintentional/unmanaged interdependencies between apps
  • It lets us easily implement unit tests (test apps without being influenced by other apps being installed)
  • It notifies us from any breaking changes, breaking dependencies, forbidden (unmanaged) dependencies, …

Why am I telling you this? Well, because Microsoft broke a dependency – a classic breaking change, not in the Base App, but in the testability framework .. acceptable for Microsoft because “it’s just a test framework”, but quite frustrating and labor intensive when it needs to go through a series of DevOps pipelines. The broken dependency was a simple codeunit (Library - Variable Storage) that Microsoft moved to a new app.

(Screenshot: the “Library - Variable Storage” codeunit, now shipped in its own app – Microsoft_Library Variable Storage_17.x.app – in the .alpackages folder.)
I get why they did it: this is more of a “system library” than a “function library”, and basically the System App needs to be able to get to this, which shouldn’t rely on anything “functional”. So architecturally, I totally understand the change. But .. it’s breaking “pur sang”, and I really hope things like this will be done using obsoletions instead of just “moving” in the future. I’ll explain later what it took to handle this issue for v17.

Since we want to comply with all codecops and implement all new features of v17, I think I found a method that works for us, so we can work on it spread over time.

The flow

So, DevOps (and SCM) is going to be the tool(s) that we will use to efficiently and safely tackle these problems.

Step 1 Create branch

I already mentioned this – all preparation can be done in a new branch. Just create a new branch from your stable branch (master?), in which you can do all your preparation work. When people are still adding features in the meantime, simply merge the new commits into this branch from time to time – possibly pulling in new code that doesn’t comply with the new version yet – but that should be easy enough to fix.

Step 2 Disable the (new) codecops that cause problems (warning or error)

This step is in fact the time that you buy yourself. You make sure that you still comply with all the rules you did not disable; but to start working on all rules that you don’t comply with yet, let’s first disable them, and later enable them one by one so we have a clear focus when solving them. For us, this meant we added quite a bunch of codecop rules to our ruleset file (disabled for now):
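
To give an idea, a sketch of such a ruleset file – the rule IDs below are just examples of codecops you might temporarily disable:

{
    "name": "v17prep - temporarily disabled rules",
    "description": "Rules we still need to fix for the v17 upgrade",
    "rules": [
        {
            "id": "AA0008",
            "action": "None",
            "justification": "To be fixed during the v17 upgrade"
        },
        {
            "id": "AA0072",
            "action": "None",
            "justification": "To be fixed during the v17 upgrade"
        }
    ]
}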

All of which we meant to fix – some more efficiently than others. I wanted to comply with most of them.

Step 3 Make sure it compiles and publishes

It wasn’t “just” codecops we needed to worry about. As said, there was also a breaking change: the “Library – Variable Storage” codeunit that moved to its own app. Lots of our test apps make use of that codeunit, so we needed to add a dependency in all our test apps to be able to “just” run our tests against the v17 dev environments:
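
A sketch of that extra dependency in the test app’s app.json – the id here is a placeholder; take the real id, name and version from the Microsoft app file itself:

"dependencies": [
    {
        "id": "00000000-0000-0000-0000-000000000000",
        "name": "Library Variable Storage",
        "publisher": "Microsoft",
        "version": "17.0.0.0"
    }
]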

Step 4 Enable codecop and fix

Up until this point, it had only taken us about 30 minutes: creating a v17 container, the branch, disabling codecops .. all find/replace, so we efficiently did the same for all apps .. and we were good to go: we had apps (and their test apps) that did NOT show any warning in the problems window, and that could be published to a default v17 container where we were able to run tests. Time to get busy!
To solve the codecops, we simply applied this subflow:

  1. Switch on a rule by removing it from the ruleset-file
  2. Commit
  3. Solve all warnings
  4. Commit
  5. Back to 1

And we did that until all the rules we wanted to fix were fixed.

Step 5 Pullrequest

From that moment, we started to pull request our apps to the master branch, to perform our upgrade. Basically, I wanted to have all pull request validation builds working before I started to approve them all to the master branches. That turned out to be a very tricky thing to do .. well .. it was simply not possible, unfortunately.

Simply said: all apps with dependencies on apps that used the “Library – Variable Storage” codeunit simply failed, because the dependencies were not there yet in the previous apps, so the pipeline was not able to deploy them to a v17 container for checking breaking changes or installing the dependent apps.

There is always a solution .. Since I didn’t want to just abandon DevOps, this is the only way I saw possible:

  • Disable the breaking-changes check in the yaml for this pull request. This is obviously not preferable, because despite the MANY changes I did, the pipeline is not going to make sure that I didn’t introduce any breaking changes. Fingers crossed.
  • Approve all apps one by one, bottom up (apps with no dependencies first). This way, I make sure there is a master version of my app available (WITH the right dependencies) for every next app that depends on my bottom-layered app. So, I had to push 65 pull requests in the right order. The big downside was that I only saw the real pipeline issues when the pipeline was finally able to download the updated dependent extension. So there was no way for me to prepare (I couldn’t just let 65 apps build overnight and have an overview in the morning – I could only build the ones that already had all their dependent apps updated with the new dependency on the “Library – Variable Storage” app), and I had to solve things like breaking tests “on the go”. This all made it very inefficient and time consuming. I reported it to Microsoft, and it seems to make sense to them to also see test apps as part of the product, and not do breaking changes in them anymore either (although I understand this is extra work for them as well… so, fingers crossed).

Some numbers

The preparation took us about 3 days of work: we changed 1992 files out of a total of 3804 files, spread over 65 apps. So you could say we touched 50% of the product in 3 days (now you see why I really wanted to keep our breaking-changes check as well ;-)).
The last step – the pull request – took us an extra 2 days, which should have been just a matter of approving and fixing failing tests (only 5 tests out of 3305 failed after the conversion).

Any tips on how to solve the codecops?

Yes! But I’ll keep that for the next blogpost (this one is long enough already).

Visualize app.json dependencies in VSCode (using GraphViz)


If you’ve been following the latest and greatest from Microsoft Dynamics 365 Business Central, you must be aware of “what’s cooking in Microsoft’s Lab”. In short, Microsoft is working on the possibility to generate a DGML file for the extension you’re compiling. A DGML file is basically a file that contains all code cross-references. Remember “where used” .. well .. that! I can only recommend watching Vincent‘s session “BCLE237 From the lab: What’s on the drawing board for Dynamics 365 Business Central” from Microsoft’s Virtual Launch Event. You’ll see that you’ll be able to generate an awesome graphical representation of your dependencies:

(sorry for the bad screenshot – please watch the video ;-))

After you have seen that session, you might wonder why I created my own “Dependency Graph”. Well .. you know .. I have been wanting to do this for a very long time. Actually ever since I showed our dependency analysis, where we basically created a GraphViz representation of our C/AL code .. a tool which I shared as well. That worked for C/AL, and I wanted to be able to show a dependency analysis based on the app.json files. Fairly easy to do .. in PowerShell. But .. we have a decent development environment now .. and I already did some minor things in an extension .. so why not …

Visualize app.json dependencies in VSCode using GraphViz

There is not much to explain, really. In my CRS AL Language Extension, I created a new command that you can find in the command palette:

This command will read all app.json files in your workspace (so this function is really useful in a multi-root workspace) and create a .dot (GraphViz) dependency file from it:
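
Something along these lines – a sketch with made-up app names:

digraph Dependencies {
    rankdir = LR
    "Sales Add-on" -> "Base Library"
    "Warehouse Add-on" -> "Base Library"
    "Base Library" -> "Base Application"
}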

It’s a really simple, readable format.
Now, in VSCode, there are extensions that let you build and preview this format. I liked the extension “Graphviz Interactive Preview“. If you have this extension installed, my command will automatically open the preview after generating the graph. You can also do that yourself by:

With something like this as a result:

Settings

I just figured that sometimes you might want to remove a prefix from the names, or not take Microsoft’s apps into account, or not show test-apps, or… . So I decided to create these settings:

  • CRS.DependencyGraph.IncludeTestApps: Whether to include all dependencies to test apps in the Dependency Graph.
  • CRS.DependencyGraph.ExcludeAppNames: List of apps you don’t want in the dependency graph.
  • CRS.DependencyGraph.ExcludePublishers: List of publishers you don’t want in the dependency graph.
  • CRS.DependencyGraph.RemovePrefix: Remove this prefix from the appname in the graph. Remark: this has no influence on the ‘Exclude AppNames’ setting.

So, with these settings:
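
For example, a sketch of what that could look like in your workspace settings – the values are just examples:

{
    "CRS.DependencyGraph.IncludeTestApps": false,
    "CRS.DependencyGraph.ExcludePublishers": [ "Microsoft" ],
    "CRS.DependencyGraph.RemovePrefix": "WLD "
}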

You can make the above graph easily a bit more readable:

Now, to me, this graph makes all the sense in the world – because I know what these names mean. But please let it loose on your own extensions and let me know what you think ;-).

Enjoy!
And I’m looking forward to the DGML abilities and what the community will do with that!

Business Central V17.1 fails your pipeline .. and that’s OK


A few days ago, we upgraded our product from 17.0 to 17.1. We had been looking forward to this release, because we usually never release a product or customer on an RTM release of Business Central. So .. finally 17.1, finally we could continue to upgrade the product and start releasing this to customers before the end of the year.
While this was just a minor upgrade .. things started to fail. We have quite some apps to build, and some of them had many failing tests.

All these failing tests for just a minor upgrade? That never happened before.
What happened? Well …

From Microsoft Dynamics 365 Business Central 17.1, Microsoft enabled upcoming features

You are probably aware of the new “feature” in Business Central, called “Feature Management“. A new functionality that lets users enable upcoming features ahead of time. It indicates which features are enabled, when they would be released, and it gives you the possibility to enable them in a certain database (usually a demo or sandbox to test the feature).
From 17.1, Microsoft enables 5 of these features out-of-the-box in the Cronus database that comes with the DVD or Docker Artifact.

Now, these features are business logic. So, by enabling them, you’re enabling new business logic. New business logic in an upgraded database means: a difference in behavior. A difference in behavior in a database with a crapload of tests usually means …

Failing tests

Exactly: DevOps will execute your tests against a new Cronus database where these features are enabled, so your tests will fail, your pull requests will fail, … basically your development will come to a halt :-).
This needs immediate attention, because before being able to continue developing, testing, .. this needs to get fixed.

My first focus was looking at the cause of these failing tests: “a changed database with new business logic – features that are actually not really released yet, just part of the database as upcoming features. So, how can I change my Docker image to be a correct one with correctly enabled features?”. Or in other words: I was looking at Docker to solve this problem.

And I was wrong….

It was actually a remark from Nikola Kukrika (Microsoft) on my twitter thread that made me look at it from a different angle. Sure, my tests fail because of the enabled features. But this is actually good and useful information: they tell you the current business logic is not compatible with the upcoming feature, and I should also indicate that in code by disabling the feature during the tests. Doing so, I actually also give myself a “todo-list” and a deadline: all disabled features need to get enabled (or in other words: I need to make my software compatible with the upcoming version of the business logic) – and even more: it will fail again from the moment the features are actually released. So you kind of get warned twice. Looking at it from this angle: you WANT these failing tests during an upgrade.

Luckily, disabling the features for the tests wasn’t so difficult. This is what we did:
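
A minimal sketch of the idea, assuming the standard “Feature Key” system table – the feature id is a placeholder (you’ll find the real ids on the Feature Management page), and you’d call this from your test initialization or from an install codeunit in the test app:

codeunit 50149 "Disable Upcoming Features"
{
    procedure DisableUpcomingFeatures()
    begin
        DisableFeature('SalesPricing'); // placeholder id - check the Feature Management page
    end;

    local procedure DisableFeature(FeatureId: Text[50])
    var
        FeatureKey: Record "Feature Key";
    begin
        if FeatureKey.Get(FeatureId) then begin
            FeatureKey.Enabled := FeatureKey.Enabled::None;
            FeatureKey.Modify();
        end;
    end;
}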

And tests are working again :-)!

Run your own AppSource Validation


Quite honestly, I’m fully into the process of getting our apps (about 30 of them) to AppSource. We chose to have OnPrem implementations first, basically to get the experience, and also because it just still sells better in Belgium (immediate market ;-)). Anyway ..
Recently, there was a call with Microsoft with the topic “what to do in order to pass technical validation”. Given my current state – quite interesting to me, obviously ;-). It was a very good presentation from Freddy, with a clear overview of how Microsoft handled it in the past, how it’s handling it now .. and what to expect in the future.
What surprised me a bit was that quite some partners still just upload a version of their app(s), and let Microsoft figure out what the problems are in terms of technical validation .. . And also that some partners had to wait quite some time before getting feedback. Well, there was quite a clear explanation that the one had to do with the other:

We are much faster in passing an app than in failing it.

And that’s normal. The validation is done by running a script. If the script passes, the validation passes. If the script fails .. Microsoft needs to find out why it fails and report that back – which is a manual process. Basically meaning: the more you check yourself, the faster your (and anyone else’s) validation experience will be!

What can you check yourself?

Well – basically everything that needs to be checked – the entire stack. Later, I will tell you how, but first, let’s see what’s so important to check – and apparently forgotten by a lot of people. Let’s start with this easy link: http://aka.ms/CheckBeforeYouSubmit . It ends up on Microsoft Docs explaining the details. During the call on Tuesday, Freddy highlighted a few common things:

AppSourceCop

You need to comply with every single rule in the AppSourceCop. Well – that’s what he told us on Tuesday. Today, during a session on BCTechTalk, he corrected this: there is actually a ruleset that they apply when checking the apps, which you can find here (not sure how long the ruleset will be found in that link .. ). So, in short – enable it in your settings!

And if convenient for you, just apply the ruleset I mentioned (I don’t – we simply comply with every single AppSourceCop rule)
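
A sketch of how that could look in your .vscode/settings.json – the ruleset path is just an example:

{
    "al.codeAnalyzers": [
        "${CodeCop}",
        "${AppSourceCop}",
        "${UICop}"
    ],
    "al.ruleSetPath": "./appsource.ruleset.json"
}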

Breaking changes

When we think of breaking changes, we think of schema-related breaking changes. But that’s not all. In fact, AppSource-related breaking changes can also be:

  • A renamed procedure
  • A renamed or removed control on a page

So .. there are A LOT more breaking changes when we think in terms of AppSource. It’s important you make yourself familiar with the settings of the AppSourceCop (aka AppSourceCop.json). At a minimum, you should refer to a previous version of your extension. And in order for the compiler to take that into account as well, also provide the app file of that version.
Here is an example of having the AppSourceCop.json (in the root of the app), the setting pointing to my previous release, and the actual released app in a folder in the project.

Note – for the VSCode compiler to work, you might have to copy that app in the symbols-folder. I just like to have it separate, so its intention is very clear (and as you can see, I .gitignore my entire symbols folder).

Affixes

In the screenshot above, you see I also set up an affix. It is important that you reserve your affix at Microsoft (it’s done by email – I’m reluctant to share email addresses on a public platform like this .. but look at my resources below: in the documentation you’ll find which mail address to contact ;-)), and set it up in the AppSourceCop.json to make sure you don’t forget ANYTHING to prefix (about anything in extension objects, and about every new object). Small tip – my “CRS AL Language Extension” can help you with the prefixing ;-).
So – set it up in the AppSourceCop.json, and the compiler will make sure you won’t forget!
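
To give you an idea, a sketch of such an AppSourceCop.json – the version, name and publisher identify the previous (baseline) release, and the affix is just an example:

{
    "version": "1.0.0.0",
    "name": "My Previous App",
    "publisher": "My Company",
    "mandatoryAffixes": [ "WLD" ]
}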

Code Signing

This is a somewhat trickier thing to check. What you need to know: for AppSource, you need to codesign your extension with a code signing certificate. The compiler will not sign your app, so this is usually done by a PowerShell script. But .. that’s not all .. you should also test the resulting (signed) app by publishing and installing it without the “skipverification” switch in the publish commandlet. Don’t forget to check that, because it’s the only way to be really sure the codesigning was actually successful!
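
A minimal sketch of such a check with BcContainerHelper – the container name and app file are examples; the point is the absence of -skipVerification:

# Publish the signed app WITHOUT -skipVerification - if the signature is invalid, this will fail
Publish-BcContainerApp `
    -containerName 'bcsandbox' `
    -appFile 'C:\Temp\MyCompany_MyApp_1.0.0.0.app' `
    -sync `
    -install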

Publish to all supported countries

Many partners “just” develop against one country localization (like “w1”). But that doesn’t mean your app can be deployed against all other countries. So you should also check whether your app can be published (and even upgraded) against all the localizations you want to support. PowerShell is your friend: set up a container per localization, and start publishing your apps!
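
A sketch of how that loop could look – the countries, credentials and app path are just examples:

$credential = Get-Credential
foreach ($country in @('be', 'nl', 'de')) {
    $containerName = "val$country"
    # Spin up a container for this localization
    New-BcContainer -accept_eula `
        -containerName $containerName `
        -artifactUrl (Get-BCArtifactUrl -type Sandbox -country $country -select Latest) `
        -auth NavUserPassword `
        -credential $credential
    # Try to publish the app against this localization
    Publish-BcContainerApp -containerName $containerName `
        -appFile 'C:\Temp\MyCompany_MyApp_1.0.0.0.app' -sync -install
    Remove-BcContainer -containerName $containerName
}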

Name and publisher changes

When you change the name or publisher of your app – if I understood it correctly – at this point, that’s also considered a breaking change. That might change in the future though .. (only the AppId should be leading). The way to check this is to upgrade from your previous version to the new version (so basically: upgrading your app). From VSCode, this is once again difficult to do, other than by running a script that sets up an environment with your previous app, so you can install the new app on top of it.

How can I do my own validation?

So – a few are easy and configurable in VSCode. For others, you need scripts!
Well, this is a part that is quite interesting for DevOps, if you ask me. Just imagine: a nightly pipeline that tests all the things (and more) above .. and reports back what the current status of your apps is.
And no – you don’t need to create it yourself ;-).

BcContainerHelper

As always – it’s BcContainerHelper to the rescue. Freddy wouldn’t be Freddy if he didn’t make your life as easy as at all possible. Right around the moment I wrote this post, Freddy released a function “Run-AlValidation” – specifically meant for validating your app for AppSource. It’s quite a script – I tried to read through it .. wow!
In general (and I’m being VERY general here – you can read the full explanation on Freddy’s blog ;-)), it will:

  • Loop through the countries you specified, and for each country
    • each version you specified
      • Set up a Docker container
      • Install the apps you depend on
      • Run AppSourceCop (basically: compile with AppSourceCop enabled)
      • Install previous version of your app to the environment
      • Upgrade to the new version of your app

As you see, it takes care of all the issues above. If all of this passes .. I think you’re quite good to go and upload your app to Partner Center ;-).
As Freddy explains in his post, it’s still being changed and optimized – so frequently refresh your BcContainerHelper to get the latest and greatest!
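
Calling it could look something like this – a sketch with example values; check Freddy’s blog for the full parameter list:

Run-AlValidation `
    -apps @('C:\Temp\MyCompany_MyApp_1.0.0.0.app') `
    -previousApps @('C:\Temp\MyCompany_MyApp_0.9.0.0.app') `
    -affixes @('WLD') `
    -countries @('w1','be','nl')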

DevOps fanciness with ALOps

You might not be using ALOps, still this section might be interesting, as it also explains some neat yaml-stuff

Quite some partners are a bit more demanding in terms of DevOps and development flow optimization. ALOps is one of the tools that make it easy.

So, I started thinking whether this was possible with the building blocks that are already present in ALOps – and taking advantage of the mechanics of DevOps. And of course it is! Quite interesting, even!

Because ALOps is a set of DevOps-specific building blocks – it is actually quite possible to configure a set of yaml-files that does quite a good job in validating your code for AppSource.
How it works is that one yaml file holds the pipeline definition, which calls a yaml template, which contains the stages and steps.

Why two yaml files? Well, doing it like that makes it possible to create a multistage pipeline, depending on the countries and versions you set up in the (first) yaml: each combination of country and version becomes a stage. And .. stages can run in parallel, so this could tremendously speed up your validation process when you have a pool with multiple agents ;-). When you have to validate against all countries and 2 versions .. any performance gain is always welcome ;-).

The files, you can find among our examples here in the folder “AppSourceValidation”. You’ll find 2 files: a template, and a pipeline. Just put them both together in your repo (just use a subfolder “AppSourceValidation” or something like that), and in DevOps, refer to the pipeline-yaml when you create a pipeline.

You see it’s quite readable – and the “magic” (if I can call it that) is in the two arrays “countries” and “selectVersions”. Those will cause multiple stages to be created when running this pipeline. In the template file, you’ll see the loop here:

Now, the individual steps take a different approach from what Freddy does in his pipeline: Freddy works with app files, I work with your source code. But you can see that we could actually create different templates – like one that would, for example, download artifacts from a release pipeline – or something like that. It’s quite simple to reconfigure, since it’s all just a matter of compiling code and publishing apps ;-). I might add other template files here in the future! Just let me know if you’re interested in that.

The outcome is pretty neat. Here is a successful validation of 4 combinations, in a multistage pipeline:

More interesting are the errors. Just the default pipeline feedback gives you all you need:

And remember: if you have 4 agents in one pool (which is not so uncommon …), these 4 stages could run in parallel – all 4 would just run at the same time!

If we all start validating our own apps, we will tremendously speed up the validation process at Microsoft. So, let’s just do that, shall we? ;-).

Resources

Regarding AppSource, there are a few resources that I got on Tuesday that is worth sharing!

http://aka.ms/GetStartedWithApps
http://aka.ms/AppSourceGo
http://aka.ms/CheckBeforeYouSubmit
http://aka.ms/bcyammer

Download all Microsoft Dynamics 365 Business Central Source Code with PowerShell


You might wonder: why would I need this? Why would I need to download the source code of Business Central, when I can simply access it through the symbols while working in VSCode – or even better, simply click a symbol and look at the code from there?
Well …

Searchability

Didn’t you ever wonder: “hey, in the previous version, this codeunit was still in this app – where is it now”? Or something along the lines of: “Where can I find an example of a test codeunit where they create and post picks”? In the old days, it was easy to get to the source code. It was simply an “export all”. These days, Microsoft’s source code is spread over a multitude of apps. Either as “application” or as “platform” .. it doesn’t really matter.
Sometimes, it’s just very useful to simply be able to search through every single letter of source code Microsoft has released as part of a certain version of Business Central. So …

PowerShell to the rescue!

I wrote a little script – a sketch of it follows below.

In that script, I’m using BcContainerHelper to simply:

  • Download the artifacts and its platform
  • For each “.source.zip”-file, I’ll unpack it in a decent destination directory

You can apply “filters” or “excludes” when you’re for example only interested in a portion of the apps – just to speed up the process.
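
Here is a minimal sketch of such a script – the destination path is just an example:

$artifactUrl = Get-BCArtifactUrl -type Sandbox -country w1 -select Latest
$Destination = 'C:\BCSource'

# Download the artifacts, including the platform (which contains the application sources)
$paths = Download-Artifacts -artifactUrl $artifactUrl -includePlatform

# Unpack every ".source.zip" file into its own folder in the destination directory
Get-ChildItem -Path $paths -Filter '*.Source.zip' -Recurse | ForEach-Object {
    $targetFolder = Join-Path $Destination $_.BaseName
    Expand-Archive -Path $_.FullName -DestinationPath $targetFolder -Force
}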

When done, you’ll have a directory (that you configured in the variable “$Destination“). Simply open the required version in VSCode, and you’ll be able to search all files.

As you see in the first line of the script .. you can indicate the exact version of BC by providing the right parameters in the “Get-BCArtifactUrl” CmdLet. More info here: Working with artifacts | Freddys blog
Maybe one more interesting example – you can also do something like this:
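
Something along these lines – again just a sketch; the filter pattern is an assumption, so adjust it to how the test sources are named in the artifacts you download:

foreach ($country in @('be', 'nl', 'w1')) {
    $paths = Download-Artifacts -artifactUrl (Get-BCArtifactUrl -type Sandbox -country $country -select Latest) -includePlatform
    Get-ChildItem -Path $paths -Filter '*Test*.Source.zip' -Recurse | ForEach-Object {
        Expand-Archive -Path $_.FullName -DestinationPath (Join-Path "C:\BCSource\TestApps\$country" $_.BaseName) -Force
    }
}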

To get all Test-apps from the be, nl and w1 localization.

Now – this is just the tip of the iceberg of something that someone else in the community is working on (Stefan Maroń) – which is currently awaiting approval at Microsoft. Nothing to share just yet – but fingers crossed it gets approved, and then the above will just be completely wasted internet space!

Meanwhile – enjoy!
