Now that Wave 2 blogging is allowed, there are many topics I want to share with you .. really, a lot. Time is not on my side – I’m in full preparation for Directions and NAVTechDays – so let’s see.
Today, while working on my SaaS deployment pipeline in DevOps with ALOps, I DID find the time to say a few words on one of the topics I’m quite excited about: shortcut keys in the web client – or in other words: full keyboard shortcut support!
In my “real world”, users (yes, I do have customers, although people sometimes seem to think otherwise.. ) have been asking for this for quite some time .. and finally I can tell them: it works like it should!
Obviously you already knew about this, because I already announced it here (and specifically here by Microsoft) – just kidding ;-).
How does it work?
Very simple: there is a “ShortCutKey” property on actions, where you can set the shortcut, like in this example:
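Something along these lines (a minimal sketch – the page, action and object names are hypothetical, not from the original screenshot):

```al
// A minimal sketch of the new property in use (hypothetical object and action names).
pageextension 50100 "Customer List Ext." extends "Customer List"
{
    actions
    {
        addlast(Processing)
        {
            action(DoSomething)
            {
                Caption = 'Do Something';
                ApplicationArea = All;
                ShortCutKey = 'Ctrl+Shift+W'; // the new property

                trigger OnAction()
                begin
                    Message('Triggered by the shortcut!');
                end;
            }
        }
    }
}
```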
I guess that doesn’t need too much explanation, does it ;-)? You can simply do the same for your own actions. And I can tell you: it works! :-).
Knowing this, let’s dive a bit into what Microsoft did…
Did Microsoft apply it to the Base Application?
Absolutely. Even more: in the current version (depending a bit on the localization), I found 1370 places where Microsoft added a shortcut! I can only imagine they were able to convert those from the Base Application ;-).
This made me think though – are there THAT many shortcuts we need to start learning by heart?
You can find the output in that same repo, but here is the overview of all shortcuts that I found are implemented:
| ShortCutKey | # of Actions | Explanation |
| --- | --- | --- |
| Alt+D | 356 | Dimensions |
| Ctrl+Delete | 1 | Delete. Remark: this is only applied on page “ItemAvailabilitybyTimeline” |
| Ctrl+F11 | 17 | Reconcile / SplitWhseActivityLine |
| Ctrl+F7 | 183 | Navigate to “Entries” |
| Ctrl+F9 | 43 | “Finish”, “Release”, “Release to Ship”, Approve, Release, UnappliedEntries |
| Ctrl+Right | 2 | Post. Only on the (BC)O365SalesInvoice page |
| F7 | 154 | Statistics |
| F9 | 101 | Post |
| Return | 93 | Open. This is an interesting one … |
| Shift+Ctrl+D | 4 | Dimensions. Only on these journals: FinancialJournal, EBPaymentJournal, DomiciliationJournal, ApplyGeneralLedgerEntries |
| Shift+Ctrl+F | 1 | SuggestWorksheetLines on CashFlowWorksheet |
| Shift+Ctrl+F9 | 1 | “Post and Print Put-away” on WarehouseReceipt |
| Shift+Ctrl+I | 92 | Item & Tracking Lines |
| Shift+Ctrl+L | 2 | Run (All) on Test Suite |
| Shift+F11 | 24 | Apply Entries |
| Shift+F7 | 228 | Card / Show Document (why not “Enter”? :-)) |
| Shift+F9 | 68 | Post & Print |
As you see, not really THAT many, but very useful nevertheless..
Now obviously the next question:
Can we change a ShortCutKey from within an app?
And yes, you can! So, if Microsoft has forgotten any, you can add them! Here is an example:
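Something like this (a hedged sketch – the blog claims this works; the “Dimensions” action does exist on the standard Customer Card, but the object names are illustrative):

```al
// A sketch: changing the shortcut of an existing Base Application action.
pageextension 50101 "Customer Card Ext." extends "Customer Card"
{
    actions
    {
        modify(Dimensions)
        {
            ShortCutKey = 'Ctrl+Shift+D'; // was Alt+D in the Base Application
        }
    }
}
```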
As you can see, in this case, I didn’t just add a ShortCutKey – I even changed the “Alt+D” to something else. And it works .. . I’m not sure we should all be doing this – and I’m also not sure what would happen with conflicting shortcut keys if everyone did (although, I guess the “first or last one wins”?) – but now at least, you know it’s possible!
If you want to play with Wave 2, get started here:
I’m returning from a very interesting workshop on DevOps in Berlin .. . At this moment, I’m wasting my time in the very “attractive” TXL airport because of my delayed flight. And how can I better waste my time than by figuring out some stuff regarding Business Central?
Figuring out indeed, because I have barely any internet, a crappy seat, nearly no access to food, … so for me this is a matter of burying myself so I don’t have to pay attention to my surroundings ;-). Anyway .. it’s not all that bad .. but a delayed flight is never nice.
Topic of today: the new SystemId!
While I was converting our app to be able to publish on the Wave 2 release .. this was something that I noticed:
All the “integration id’s” are marked for removal, and – most interesting – will be replaced by “the SystemId”. What is that? Google, Microsoft Docs .. none of my conventional resources helped me find out what SystemId is .. luckily, I did come across some information from Microsoft on Yammer ;-)..
RecordIDs
You probably all know RecordIDs, right? A single “value” that refers to a specific record in a specific table. We all used them in generic scenarios, right? So did Microsoft – I don’t know if you know “Record Links”? A system table that stores notes and links to specific records? Well, the link to the record is made through the RecordID. We have been using it for years .. . Now, a big downside of using RecordIDs was the fact that when you rename a record (change one of the fields of its primary key), its RecordID changes as well .. and all of a sudden, you could lose the connection in all tables where you stored that specific ID. Long story short – not ideal for integration or generic scenarios…
Surrogate Keys
And this is where the “surrogate keys” of my good friend Soren Klemmensen came into play. He came up with a design pattern (well, I don’t know if he came up with it – but he sure advocated it for a long time) that described how to give a record a dedicated unique key consisting of one field. Basically: add a field to the table, and make sure it holds a unique GUID. Give all these surrogate key fields the same field number, and you are able to generically access the value of the key for any record.
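A minimal sketch of that pattern (hypothetical table; field number 8000 mirrors what Microsoft used for its “integration id”):

```al
table 50102 "My Record"
{
    fields
    {
        field(1; "Code"; Code[20]) { DataClassification = CustomerContent; }
        // The surrogate key: same field number in every table, filled with a unique GUID.
        field(8000; "Surrogate Key"; Guid) { DataClassification = SystemMetadata; }
    }

    trigger OnInsert()
    begin
        if IsNullGuid("Surrogate Key") then
            "Surrogate Key" := CreateGuid();
    end;
}
```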
This is something Microsoft actually implemented themselves. And the code is all over the place. Even in Wave 2, we still have the code to fill the “Integration Ids”, as they call them. A nice system, but a lot of plumbing is needed to make it work. I don’t know if there was a design pattern that described what you needed to do to apply this to your own tables – I never did ;-). But definitely interesting for many scenarios. Thing is .. it’s quite a lot of work.
The SystemId
Now, as you saw in the first screenshot: Microsoft is abandoning this field 8000 (that so-called “integration id”) – their first implementation of the surrogate keys – and will implement the “SystemId” in the platform. Meaning: whatever you do, you will ALWAYS have a key called “SystemId” on your table – a unique GUID that identifies your record in that table, and that will never change – even when you rename the record.
How cool is that! Here is an example of a totally useless table I created, to show you that I have the SystemId in IntelliSense:
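The screenshot is gone, but it was something along these lines – just any table you create:

```al
// A stand-in for the "totally useless table" from the screenshot.
table 50103 "Useless Table"
{
    fields
    {
        field(1; "Code"; Code[10]) { DataClassification = CustomerContent; }
    }
}
```

…and in any code against that table, Rec.SystemId (a Guid) simply shows up in IntelliSense, without you having declared anything.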
What can we expect from the SystemId?
Well, in my understanding – and quite literally what I got from Microsoft (thanks, Nikola):
It exists on every record
But not on virtual/system tables (not yet, at least)
You can even set it in rare scenarios where you want to keep the same value (e.g. copying from one table to another, upgrade, …). Simply assign the SystemId on the record and do Insert(true, true) – 2x true (see the sketch after this list)
There is a new keyword – GetBySystemId to fetch by system id
It is unique per table, not per database. Customers and items may have the same IDs, though that is hard to achieve if you are not manipulating it yourself, since GUIDs are practically unique. Let’s say they are “probably” unique – but on SQL Server, there is a unique key defined on the field, so uniqueness is only guaranteed per table.
Integration Record is still there, however the Id of the Integration Record matches the SystemId of the main record (Microsoft has code and upgrade in place)
You can only have simple APIs on it (no nesting, like lines). At this point, at least. It should be fixed soon, which is why the APIs are not refactored yet to use SystemId instead of Id.
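A minimal sketch of those capabilities (hypothetical procedures, standard Customer table):

```al
codeunit 50104 "SystemId Examples"
{
    // Keep the same SystemId when copying a record (e.g. copy/upgrade scenarios).
    procedure InsertWithSameSystemId(SourceCustomer: Record Customer; var TargetCustomer: Record Customer)
    begin
        TargetCustomer.TransferFields(SourceCustomer);
        TargetCustomer.SystemId := SourceCustomer.SystemId;
        TargetCustomer.Insert(true, true); // 2x true: the second one keeps the assigned SystemId
    end;

    // Fetch a record by its SystemId with the new GetBySystemId keyword.
    procedure FindBySystemId(Id: Guid)
    var
        Customer: Record Customer;
    begin
        Customer.GetBySystemId(Id);
        Message('Found %1', Customer.Name);
    end;
}
```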
A few more remarks
IF you create a field that refers to a SystemId, then it makes sense to use the DataClassification “SystemMetadata” for it. Not because I say so .. but just because I noticed Microsoft does ;-).
Another not-unimportant thing I noticed: this is a system-generated field. So if you need the field number, you have “RecRef.SystemIdNo”:
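In code, that looks something like this (a small sketch):

```al
procedure GetSystemIdFieldNo(TableNo: Integer): Integer
var
    RecRef: RecordRef;
begin
    RecRef.Open(TableNo);
    // SystemId is system-generated, so you ask the platform for its field number:
    exit(RecRef.SystemIdNo);
end;
```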
My take on it
From what I understood: there is work to do, but things are looking good :-). In fact, it is exactly what we have been asking for – and Microsoft delivers. Again! Great! I know this will see a lot of use in the (near) future! Within the Base Application, and in lots of apps.
Do know, I didn’t have any documentation about this – so all of it is based on a small remark on Yammer, and things I could see in code… So – if you have anything to add – please don’t hold back ;-). That’s why I have a comment section ;-).
Small message that I just needed to share :-). As said – I’m prepping my sessions for Directions, which basically means: I’m spending all my free time in “Business Central” these days. No printing, hardly any social contacts …

And while publishing to SaaS, I noticed this when I refreshed my page:

That’s right! A new logo, people! (I should say “icon” actually, but you know what I mean). Let’s have a closer look:

Doesn’t look bad! Not bad at all .. But it does mean I’ll have to redesign my 3D printed logo ;-). And I will … I so will .. . As I said on twitter earlier today: I’m so distracted now that I have to first make sure that I started a new print with a new concept before I can continue prepping my sessions :-).
Sorry for the shortness of this blog (maybe you like it that way ;-)) – but here’s just a small reminder for anybody that has been sleeping under a rock for the last couple of days:
Microsoft Dynamics 365 Business Central 2019 release Wave 2 is released!
All you need to know is simply quite well documented by Microsoft.
Let me give you a few links:
And yes, that’s right: C/AL is gone, and the RTC is gone as well! But together with that, a lot of goodies are being thrown in our lap! If you want to know more, read the above links, or come to Directions or NAVTechDays and learn from the very people that built it!
It’s a big one – so build pipelines will break, code needs to be upgraded. I guess it’s time for action ;-).
The better half of my past week can best be summarized by this oh-so-descriptive error message:

Right: a message I have spent a long time on to find out what was happening – and what caused it. Multiple days – so let me try to spare you the pain for when you encounter this error.
(tip: if you don’t care about the story, just skip to the conclusion ;-)).
History
We are rebuilding our product for Business Central – and are almost finished. In fact, we have spent about 500 days building it – and since the recent release of Wave 2, we are fully in the process of upgrading it – because obviously, since all of it is extensions (we have a collection of 12 dependent extensions), that should be easy. (Think again – Wave 2 came with a lot of breaking changes… but that’s for another blogpost ;-)).
Symptoms
Our DevOps builds had been acting strange for a while – just not “very” strange .. . In fact: when a build failed with a strange error (yep, the above one), we would just retry, and if OK, we wouldn’t care.
That was a mistake.
Since our move to Wave 2 .. the majority of the builds of only 1 of the 12 apps failed – and even (what never happened before) the publish from VSCode failed as well, with the same error message:

Insufficient stack to continue executing the program safely. This can happen from having too many functions on the call stack or function on the stack using too much stack space.
We are developing with a team of about 9 developers – so people started to NOT be able to build an environment, or compile and publish anymore. Sometimes. Yes indeed: sometimes. I had situations where I thought I had a fix, and after 10 builds or publishes – it started to fail again.
And in case you might wonder – the event log didn’t show anything either. Not a single thing. Apart from the error above, that is.
What didn’t help
I started to look at the latest commits we did. But that was mainly due to the upgrade – stuff we HAD to do because of the breaking changes Microsoft introduced in Wave 2.
Since it failed at the “publish” step, one might think we had an install codeunit that freaked out. Well, we have quite a few install-codeunits (whenever it makes sense for a certain module in that app) .. I disabled all of them – I even disabled the upgrade-codeunits. To no avail.

Next, I started to look at the more complex modules in our app, and started to remove them .. Since one of the bigger modules had a huge job during install of the app – AND it publishes and raises events quite heavily – I was quite sure it was that module that caused the pain. To test it, I removed that folder from VSCode, made the code compile .. and .. things started to work again. But only shortly. Moments later, it was clear in DevOps that certain builds started to fail because of the exact same error. From victory .. back to the drawing board ;-).
Another thing we tried was playing with the memory on our build agents and docker hosts. Again, to no avail .. that absolutely didn’t help one single byte.
…
And I tried so much more .. really. I was so desperate that I started to take away code from our app (which we have been building for over 6 months with about 9 developers (not fulltime, don’t worry ;-)). It’s a whole lot of code – and I don’t know if you ever tried to take away code and make the remaining code work again .. it takes time :-/. A lot!
What did help
It took so much time, I was desperately seeking help .. and from pure frustration, I turned to Twitter. I know .. not the best way to get help .. but afterwards, I was quite glad I did ;-).

You can find the entire thread here:
I'm getting desperate here .. anyone has seen this error when publishing from VSCode? (or buiding from DevOps). #AL#msdyn365bc No installcode, no upgradecode, enough memory, no code analysis issues, …. pic.twitter.com/QM8PVKgZ8u
First of all: thanks so much to all the people for their suggestions. There were things I hadn’t tried yet. There were references to articles I hadn’t found yet. All these things gave me new inspiration (and hope) .. which was invaluable! Translation files, recursive functions, event log, dependencies, remove all code, force sync, version numbers, …
Exactly the same error message, with a big XMLport. It first pointed me in the wrong direction (recursive functions / XMLport) ..
But then one of our developers reminded me that, months back, we also had a big object: a 1.2Mb codeunit, auto-generating all Business Central icons as data in a table, to be able to use them as icons in business logic. Initially I didn’t think it would ever have an effect on the stability of the app (in this case – the inability to publish it) .. we wrote the damn thing more than 4 months back, for crying out loud :-/ and the code was very simple – nothing recursive, no loops, very straightforward. Just a hellofalot of code ;-). But .. it doesn’t hurt to try what would happen when I removed the code .. so I tried .. and it works now! Victory!
Conclusion
The size of a file (or object) does matter. If you get the error above – it makes sense to list your biggest files, and see if you can make them smaller by splitting the objects into multiple ones (if possible.. ).
In our case, it was one huge object in one file. And I don’t know what exactly was the problem: the size of the file, or the size of the object. There is a difference. If I had wanted to keep the functionality, I might have had to split the object into multiple codeunits, and on top of that, I might have had to split those into multiple files (which – in my honest opinion – is best practice anyway..).
Also, I have the feeling that Wave 2 is a bit more sensitive to these kinds of situations.. I don’t know. It’s just – we had this file for quite a while already, and it’s only with the upgrade to Wave 2 that it started to be a problem.
In any case – I hope I won’t wake up tomorrow, concluding the error is back and all the above was just one pile of crap. Wish me luck ;-).
I’ve got quite a week ahead of me .. . Not only will I host a session and some workshops .. I will actually host 2 sessions, 2 workshops and an ISV session this year. What did I get myself into?
No repeats!
If you look at my session schedule, and you have visited Directions EMEA, well, you might wonder if I’m “just” redelivering content at NAVTechDays. Well .. No! Totally not, actually. Without giving away anything – let me try to explain …
Development Methodologies for the future (Thursday 11:30 – 13:00)
First of all, if you attended my session at Directions, you noticed that I actually prepared 3 sessions there, and the audience chose the topic of that particular session. I was lucky that all three topics I prepared were about equally popular in the vote – so I would be stupid to do just a repeat. No, I will actually slice off a completely different topic than I did at Directions EMEA. All new content – and more ;-). More details Thursday at 11:30 during my session ;-).
{Connect App}² (Friday 11:00 – 12:30)
My session with Vjeko at Directions was “Connected Apps” .. . This one is “squared” ;-). Which means: more! Much more! So, if you attended that one at Directions, and you thought we took it “far” – well – think again! Just to say: this one isn’t a repeat either. How could it be? At Directions, there were only 45 minutes ;-).
Workshops
Also this year, I will be hosting workshops during the predays, which I always look forward to. I just hope the internet will be good, because I will be quite dependent on it ;-). I prepare individual Azure VMs for every attendee to make it as comfortable as at all possible .. but that means: internet! ;-). What I will be doing is something I have been doing quite a lot …
Developing Extensions for Microsoft Dynamics 365 Business Central – Introduction (Tuesday)
For the people that are putting their first steps into AL development.
Developing Extensions for Microsoft Dynamics 365 Business Central – Advanced (Wednesday)
For the people that have already put their first steps .. but still feel they need some guidance for the “stepping” to feel comfortable (if that’s an explanation at all ;-)).
ALOps (Thursday 15:40 – 15:55)
And if that’s not enough .. I’m doing yet another session .. . This one is an ISV session for the product we have been working so hard on to get to the community: ALOps. We are a Platinum sponsor, which comes with an ISV slot – and I’m looking forward to speaking with everyone interested in “doing decent DevOps for AL easily” ;-). We will obviously also have a booth at the EXPO – please come by! We have stickers ;-). And we might just get your pipeline up and running .. during the conference ;-).
In total, that means I have about 19 hours and 15 minutes of content to deliver at NAVTechDays … . Again .. what did I get myself into :-/.
It’s over – the week I always look forward to for so long passed by in the blink of an eye: NAVTechDays. As Vjeko already shared the goodies – I will do so as well – joined with my final thoughts and some pictures ;-).
I feel old and repetitive saying this conference is “something else”. Just imagine: quality food – morning, noon and evening; quality (recorded) sessions – all available to the community; 90-minute deep-dive topics, in quality seats, with quality media equipment, and quality speakers (uhum ;-)) – all captured by a quality photographer. Quality! That’s NAVTechDays: no dime is spared to provide THE BEST conference experience anyone could wish for. From start to finish! Unmatched on any level. This year, there was even a hairdresser, I kid you not ;-).
As predicted, my NAVTechDays was a bit too busy. So much content in only a week .. I have to admit – it’s simply too much. I probably won’t do that again – and if I do – I’ll at least have this blogpost to hold onto to declare myself crazy .. again ;-).
One special thing I was really happy to be able to do: I got my parents into my session. That’s a special feeling, I can tell you. You always try to explain what you do, and what impact it has on you – but they can only understand when they actually experience it :-).
I can’t judge whether my sessions were well received. Thing is – I realize that the topics I talk about and the opinions I evangelize are not always the opinions shared by all of you. Like “Code Customized AL” or “embracing dependencies”, to name just a few. I know people that are passionately for code-customizing AL – and who are passionately against any form of dependencies.. . Well, I realize that this can have its effect on how a session is received (like: complete bullshit ;-)). All I can do is share my experience, and what I believe makes sense going forward .. and I still stand 100% behind what I have been advocating ;-). And yes – in the “real world”.
In any case … as said, you can find my sessions on mibuso and on youtube here:
In the next weeks/months, these videos will also be turned into a series of blogposts. I have already planned a few – and Vjeko is already blogging his ass off as well .. . Expect a lot, soon (or late – no promise ;-)).
All there’s left for me to say is: thank you! Thank you for joining my session, thank you for joining my workshops, thank you Luc, for making this happen for all of us – it’s a real honor to be a small part of it! Thank you, Vjeko, my bro, for sharing the stage with me :-). Awesome week!
Picture time!
Ma waldo
All headphones were sold out for my ISV session on ALOps
Bros!
Yep, a hairdresser at the conference – and what I noticed: always busy
The master: mister MiBuSo.
This moment right here must have been one of the most stressful moments of my life. No BC SaaS, no CustomVision, no screen mirroring – and only 5 minutes to go before our session
NAVTechDays wouldn’t be NAVTechDays if there was no beer …
47 countries! Amazing!
We brought a simple, but very “attractive” game to the ALOps booth. Not only was ALOps well received – the game certainly was as well
In both sessions, I talked about the concept of “dependencies”. Yes indeed – in my opinion, “dependencies” are an opportunity we should embrace .. (just watch the “Development Methodologies” session if you want to know how and why). Now, during the sessions, the RESTApp was actually just an example of how we internally “embrace” the concept.
What does it do?
Well .. not much, really. At least if “making your life a lot easier” is “not much”, that is ;-).
It just “encapsulates” the complexity, the functionality, the responsibility, the “whatevery” that has to do with “REST Calls”.
I mean, did you ever try to use the httpclient wrappers to do any kind of “decent” webservice call? Did you ever fumble with setting the “content-type”, or any kind of headers? And honestly – did you spend more than 5 minutes to make it work? Well, if the answer to all these questions is “yes” .. then you will appreciate this app ;-).
Or, as a picture tells you more than a 1000 words: it turns this code:
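(The original screenshot is gone – below is a generic sketch of the kind of plumbing it showed, not the literal code.)

```al
// Raw Http types: quite some plumbing for one simple call.
procedure CallServiceRaw() ResponseText: Text
var
    Client: HttpClient;
    Request: HttpRequestMessage;
    Response: HttpResponseMessage;
    Content: HttpContent;
    Headers: HttpHeaders;
begin
    Request.SetRequestUri('https://example.com/api/v1/items');
    Request.Method := 'POST';

    Content.WriteFrom('{"name": "test"}');
    Content.GetHeaders(Headers);
    Headers.Remove('Content-Type');                  // the default content type has to go first ..
    Headers.Add('Content-Type', 'application/json'); // .. before you can set your own
    Request.Content := Content;

    if not Client.Send(Request, Response) then
        Error('The call failed.');
    if not Response.IsSuccessStatusCode then
        Error('Webservice returned status %1.', Response.HttpStatusCode);

    Response.Content().ReadAs(ResponseText);
end;
```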
Into:
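Something like this (a hypothetical sketch – the codeunit and method names are illustrative, not the exact API of the waldo.restapp):

```al
procedure CallServiceWithHelper() ResponseText: Text
var
    RESTHelper: Codeunit "REST Helper"; // the helper codeunit from the RESTApp
begin
    // Hypothetical method names – check the actual app for the real signatures.
    RESTHelper.Initialize('POST', 'https://example.com/api/v1/items');
    RESTHelper.SetContentType('application/json');
    RESTHelper.AddBody('{"name": "test"}');
    RESTHelper.Send(); // error handling and logging happen inside the helper
    ResponseText := RESTHelper.GetResponseContentAsText();
end;
```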
I hope you agree that this second way of handling the same call is a LOT easier:
Not having to declare all these different http types – just the RESTHelper codeunit from the RESTApp.
Not having to care about the order of adding the content type, headers or anything like that.
Not having to care about error handling.
…
The current functionality includes the following things:
A helper codeunit for REST calls
A helper codeunit for JSON stuff
Logging of the request and the response message
I could have done that with a simple codeunit, dude
I hear you. “Why an app? Why implement the complexity of dependencies for such a simple thing?”
Well, it’s not just there to make your coding easier. It’s there to make the whole lifecycle of how you do webservice calls in all your projects easier. Just think about it. You are probably going to do REST calls in many different products and/or implementations. And many of them are different, need more functionality, …
Or – you just might run into a bug in how you do it, and have to update a bunch of other implementations .. .
Or – at some point, you might figure that having decent logging for all outgoing REST calls would be interesting (and let me tell you: it IS interesting (and yes, it’s already included in this app))! If you have implemented a RESTApp like this, a simple update gives you this new functionality on ALL your projects. Simply release it to all your customers (long live DevOps!). You can update all you want .. as many times as you want.
Or – at some point, you need to be able to set up “safe” sandboxes, and need to overrule all webservice calls in a sandbox to not risk “live” calls from a sandbox (guess what – this IS something to think about!). Just update this app, deploy, done! For ALL your customers.
I can give you lots of scenarios, to be honest.. . But tell me again – how is that codeunit going to help you in any of this?
Just an example
I know, I already had this subtitle :-). But now, it’s more like a disclaimer.
Don’t see this code as a leading app. It’s meant as an example .. and nothing more. It is not the version we are using internally (which I’m not allowed to share, as I don’t own the code). It doesn’t have “authentication helpers” or anything like that. And it probably doesn’t have all the functions necessary to do all kinds of REST calls. Obviously, this is where you can come in :-). Maybe it’s not a “leading app” now (if that’s an expression at all) .. but you can help me make it one ;-). Please feel free to do any kind of pullrequest. Anything that might help the community. Change, restructure, .. whatever!
Maybe, at some point, it’s mature enough to pullrequest into Microsoft’s System Application ;-). In its current state, in my opinion, it isn’t.
Ideas
I do have some ideas that I want to include in this. Like making it easier to work with different authentication types. Or including a way to set a test URL, like RequestBin, that replaces all calls with this URL, so you can track the actual requests that are being generated.
If you have ideas, or remarks, or anything, you can always leave a comment – or use github and use the “issues” section to add ideas (or issues).
Let me be clear: this post is NOT a recommendation that you should use Docker for your OnPrem customers’ production environments. Not at all. This is merely a blogpost about the fact that I wouldn’t mind Microsoft officially supporting Docker as an alternative to an NST deployment.

Just imagine you would be able to continuously upgrade your customers. This actually has quite an impact on your daily life .. on anything that involves the life of a partner: from support to customer deployment to hotfixing, to release management, … .
Let me give you a few examples – and I’ll do that with some extreme numbers: either we have 300 customers, all on different versions – or we have 300 customers, all on the same version:
Product releases
In a way, you need to be able to support all product releases on all versions of Business Central (or NAV) that you have running at your customers – it doesn’t make any sense to support a version that isn’t running at any customer, does it ;-)? If a customer is running v13 of your product, you need to be able to hotfix it, and roll out the fixes to one or more customers with that same version.
Even more – not only do you have to keep track of all the versions/releases/customers – you need to manage the hotfixes, and bump them to all necessary versions/releases (a hotfix in v13 might be necessary in 14, 15, .. as well).
On the other hand – if everyone would be on the same (and thus latest) release: everyone can be treated the same, and hotfixing is easy, rollout is easy, administration is easy. Simply because there is only one product release to maintain (you start to get why Microsoft is pushing us to the cloud, right? ;-)).
In order to facilitate this in Git/DevOps, one way is to create (and maintain) release branches for all supported releases. On top of this, you have to maintain a dedicated branch policy, build pipeline, artifact feed and what not for each of these branches .. . Good luck doing that for 300 different versions.. .
Support
I think we can all
agree that our support department would be so much relieved if they would only
have to support 1 version/release, right?
All bugfixes/improvements/features/tooling/… are just there.
Bottom line
The easier we are able to upgrade a customer to the next runtime of Business Central .. the more customers WILL be on the latest Business Central and version of our product .. the easier it is to manage our product .. the easier it is to support our product .. the easier our life is. It’s a simple fact. No science needed here …
Upgrading an OnPrem customer
You might know my opinion on “code customizing AL” – if not, you can find it in this post. In a way – for me – “code customizing AL is evil” ;-). So .. in that perspective, I’m going to assume we are all on extensions/apps .. and all we have to do is manage apps at customers.
In terms of upgrading – we would upgrade apps to new versions of those apps, which is quite easy to do. You can prepare all upgrades in upgrade codeunits, so in a way, when prepped correctly, upgrading is just a matter of installing a new app the right way (by triggering the upgrade routine). I will not go into how to do this.
But that’s not all …
We also have to upgrade the platform, the runtime. Not the client anymore (thank god ;-)), but still all the NSTs and other binaries we have installed. At this point, it’s still quite manual: “insert the DVD and start clicking”. I know it’s scriptable .. heck, I even created a function once to “easily” upgrade an existing installation by calling the “repair” option from the DVD (you can find the script here), but honestly, in a world with Docker …
The Docker Dream
Just imagine – all you do to install an OnPrem Business Central customer is install a real SQL Server for the database, and use the Docker images provided by Microsoft for the NST. Why only the NST? Well, that’s the part that needs to be upgradable, like, every single month.
But when on Docker, you know how easy it is to set up a new environment, right? How easy would it be to upgrade, to set up UAT environments on other versions, to “play” with localizations, .. . Well, as easy as we already know from using Docker – but applying this to a production environment would really eliminate the complexity of upgrading continuously.
Honestly, I think this is the missing link to be able to implement full “continuous upgradability” for OnPrem customers.
We already do this …
Call me nuts – but for our internal database, which is completely our own concern, we already have this running as a proof of concept. And it has been running for many months without one single problem :-). I shouldn’t say this, but it has been making upgrading and maintaining this particular environment (with 20+ apps) so much easier that we are really wondering “why not” for customers. We won’t, obviously, but still … we dream ;-).
Vote!
If you agree with me, then you also agree with Tobias Fenster, who has created an idea on the ideas site which you can upvote – please do! If you don’t understand a single thing about Docker or the impact it could have for us – then just take my word for it and still upvote it here: https://experience.dynamics.com/ideas/idea/?ideaid=daf36183-287e-e911-80e7-0003ff689ebe
First of all, this is my first post of the new year, so I’d like to take this opportunity to wish you all the best and happiness for 2020! Last year, I did a post on my new hobby, “3D printing”. Well, now we’re a year later, and I’m still printing almost full time 24/7, so let me wish you all the best with some 3D printed goodies I made for Xmas ;-).
So now – “let’s get started” with ..
Why would I rename all my AL-files?
Well – because of Microsoft and their new (upcoming) code analysis rule about file name conventions. You might already have complied with their previous conventions, and … they will change. Even better: they actually already changed ..
Just read here:
In the “old” days, Microsoft wanted you to name the files including the object numbers. But not anymore.
This doesn’t really have to be a problem .. because today, we can freely choose however we want to name our files. But that will change (a bit) …
If you use the insider version of the AL Language extension for VSCode, you’ll see that we will get new code analysis rules that check the file name. This is an example that I had with my waldo.restapp that I recently blogged about:
It doesn’t matter what my previous naming convention was – the result is that, all of a sudden, the compiler is going to have an opinion about it and will identify it as a “problem”. It even calls it “incorrect”, not just “not following the guidelines” or whatever.. ;-).
But I care. I want to comply with as many coderules as possible .. as long as they don’t “hurt” me or my company. That’s why we have a simple step in the build pipelines of our company: if any coderule fails with an error or warning – the code is not accepted by the build. And admittedly – this is a good naming convention anyway. So .. let’s comply!
Waldo’s CRS AL Language Extension
As you might know – I created a VSCode Extension that makes it possible to easily handle file naming conventions. Because of these new guidelines (and coderules), I recently added a new option “ObjectTypeShortPascalCase” so that now it’s also possible to handle it the correct way with settings in this extension.
BUT
Don’t rename your files by just invoking the command “Rename All Files” – DON’T:
Because:
You might change all your code history
You might have wrong settings
You might not be able to rollback and rethink ..
So, let me try to give you the safest way to rename all your files.
I’m going to assume you are source-controlling your code here. If not .. I don’t even want to talk to you ;-).
Just kidding – if not, just quickly initialize a git repo from the current state of your code. You might want to revert all changes you did .. and this gives you a way to do that. Honestly, this particular rename I did for this blog, I did 3 times .. just saying ;-).
1 – Create a new branch
Before you do any coding, you should branch out. Same with this: just create your own snapshot of changes to work in. You can’t be too careful ..

You created it? Well, now you’re safe. Any kind of mistake within the next steps – just remove the branch, create a new one, and start fumbling again ;-).
2 – Change the setup
I think this is a good setup for the new filename conventions:
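The screenshot didn’t survive, but it boils down to settings along these lines (settings of my CRS AL Language extension – double-check the exact names against the extension’s documentation):

```json
{
    "CRS.FileNamePattern": "<ObjectNameShort>.<ObjectTypeShortPascalCase>.al",
    "CRS.FileNamePatternExtensions": "<ObjectNameShort>.<ObjectTypeShortPascalCase>.al",
    "CRS.OnSaveAlFileAction": "Rename",
    "CRS.RenameWithGit": true
}
```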
Notice that there is no property “RemoveSuffixFromFilename” or “RemovePrefixFromFilename”. These properties invoke an extra procedure after renaming, to make sure you have no prefix or suffix in the filename anymore. Thing is – that would make the name not comply with the naming convention of Microsoft’s coderule again, unfortunately.
3 – No Multiroot
Also notice the “renamewithgit” setting. This is important, but a difficult one. If you don’t set this to “true”, any rename would result in a “delete” and “create” of your file – meaning you would lose all the history of your files.
In my world of DevOps and Source Control, this is unacceptable. Really unacceptable. I want my history kept at all times! So, this setting is going to make sure you rename with git (“git mv”).
Now – in all honesty – I have already spent too much time making this setting as stable as at all possible. And it still isn’t. Sometimes, it just doesn’t do what it’s supposed to do. Known issues are still:
With new files in combination with a multiroot workspace, it acts crazy (it just deletes the content)
With the “rename all”, it seems to be too fast sometimes, which fails the rename, which loses the history, which makes me angry ;-).
For the first, I have built in some fail-safes, which basically will NOT rename with git if you have a multiroot workspace (not my preferred option, but for now, I have to give up on this – spent too much time already :'( ) ..
And the second – that’s why I’m writing this blogpost ;-): how can you still rename all your files successfully, even when it fails on your first attempt?
But – to solve the first (you DO want to “renamewithgit”) – just avoid using multiroot workspaces. If you have multiple workspaces open, just close VSCode, and open the workspace of the one app you would like to rename all files of.
4 – Commit these settings!
You might want to revert to this very version of this set of files if any of the next steps fails for some reason or another. So please, commit now!
5 – Rename – All Files
Now is the time to start renaming all your files. Remember: you set “renamewithgit”, AND you’re not working multiroot, so it is going to try to rename everything with git. Indeed: try! Because more than likely, “some” will succeed, and “some” will fail. If you have failed ones, it will show the “crs-al-language” output like this:
It seems that one process is blocking another from executing decently .. and because of that, it actually renames the classic way, which means it doesn’t preserve your git history.
When the complete rename is done, you should also have a look at the Source Control pane. If you have fail messages, you should get some files in “Staged Changes” and some in “Changes”. Well: the staged ones are fine, and the ones in “Changes” you’ll need to retry.
Do this by “Discard All Changes” and “Discard All x Files”.
Now, you basically only renamed the ones that were able to rename, and you staged them. The ones that failed are restored to their original state, which still has all history, but are not renamed. So ..
6 – Repeat …
It’s obvious .. this is where you repeat. So, rename again, and discard the changes if you have failed ones. And keep doing that, until all files end up in your staged area (the “R” at the back indicates that you renamed the file):
7 – Commit, PullRequest …
And this means: you’re done. Time to commit and pullrequest your changes to your master/release/dev.. .
Conclusion
I’m not proud of this workaround – I’d like to see it “just” work. But when it comes to “renaming or moving files with git”, something seems odd. I mean, think about it – a simple rename in the VSCode Explorer window also ends up with a “Delete/Untracked”, meaning – you lose history.
A drag&drop to a new folder as well.
So – just my conclusion – if the VSCode wizards are not able to solve this simple “rename” issue – why would I be able to do it better than I already did ;-).
I did a poll recently during one of my sessions – and I was surprised that about half of the people don’t regularly use “snippets” in VSCode.. .
Well, some of you probably know that I’m a big fan of snippets. Whoever has joined one of my sessions of the last couple of years where I was talking about VSCode, or was working with VSCode .. most probably, I was talking about, showing or using snippets.
Honestly, I can’t live my life in VSCode without snippets. It would be so much less efficient .. . VSCode wouldn’t be much more than a notepad that is hooked up with source control in that case.
Back in 2017, I even spent quite some time in my session at NAVTechDays on snippets. I’ll share the session at the end of this post…
… are stored in a json file somewhere in your roaming profile
… have a specific syntax, which lets you give a description, a prefix, a body, … and define placeholders, including multi-cursor placeholders
… can use variables, like the filename, date or your clipboard, which make it possible to create very “generic” snippets.
…
All you need to know is on the page mentioned above, and you’ll get going with snippets in no time, I promise you!
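Just to give you an idea, this is what a minimal (hypothetical) snippet definition looks like in such a json file:

```json
{
    "Message with waldo": {
        "prefix": "tmessagewaldo",
        "description": "A simple Message statement",
        "body": [
            "Message('${1:MyText}');",
            "$0"
        ]
    }
}
```

The `${1:MyText}` is a placeholder you tab into, and `$0` is where the cursor ends up.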
My snippets
There is
more :-). I have been creating lots of
snippets in my
extension. I admit, I copied
Microsoft’s snippets and improved them – but I also have created lots of new
snippets. Ones that I use a lot in terms
of “design patterns”, but also for implementing code that I’m not
used to, and don’t want to forget (like the assisted setup). If you install my “CRS AL Language
Extension”, you’ll recognise my snippets with “waldo” in the suffix:
And yes, if you don’t want to work with my snippets, you can disable them with a simple setting:
"CRS.DisableCRSSnippets": true
(in case you’re wondering: you can disable Microsoft’s snippets as well ;-)).
Tools that can help you to create snippets
I was recently pointed to this tool: https://snippet-generator.app/. When you are creating your VSCode snippets, simply paste the text that you want to convert to a snippet into this tool, and you immediately get it converted to a JSON representation of a VSCode snippet. It tremendously speeds up the creation of a snippet, from minutes to seconds ;-).
On the other hand, there is another tool that you can install in VSCode: the snippet-creator. It basically gives you a command that converts your selected text into a user snippet in the language of your choice:
Whatever you prefer – both work very nicely :-).
Some questions I get a lot
Where are snippets stored?
The user-defined snippets that you create, are stored here: %USERPROFILE%\AppData\Roaming\Code\User\snippets.
The snippets that come from an extension, are stored here: %USERPROFILE%\.vscode\extensions\<extensionname>\snippets
Can I disable snippets?
Well, no. You can’t disable snippets in any decent way (that I know of). I know I was talking about a setting in my extension, and yes, that’s a way, but it’s not a decent way ;-).
In fact, what I do in that extension is simply rename the “snippets” folder to “snippets-disabled”. That way, the extension is not able to find the snippets, and won’t show them anymore. The downside of this is that it will give errors in the background because it’s not able to find the snippets anymore, like:
It’s not really noticeable, but they are there… .
Can I change snippets?
Well, no again. To be fair: you CAN change a snippet in the extension folder, but do know that when the extension is updated, it will basically overwrite the snippets … and you lose your modification. So in my opinion, that’s not an option.
That was it! I hope you’re already into snippets and this blogpost was completely useless. If not, at least I hope it triggered you a bit ;-)! The only thing left for me is to share the NAVTechDays session I was talking about earlier:
In short: you need to prefix or suffix, especially for apps on AppSource, but actually for anything you do!
AppSourceCop can help (let’s say “force”) you
You are probably familiar with the AppSourceCop: the codecop with code analysis rules that are specific to AppSource. Well .. when you enable it, it can help you. And in the next release, it will force you to remember to set an affix (which means: a suffix or a prefix).
You can simply enable the AppSourceCop by:
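In your settings, that’s this entry (the standard “al.codeAnalyzers” setting of the AL Language extension):

```json
{
    "al.codeAnalyzers": [
        "${AppSourceCop}"
    ]
}
```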
In terms of “affixes”, the AppSourceCop needs to know which ones you use. If you don’t do that – in the next release (v16) – the analyzer will tell you that you HAVE to tell him:
What you need to do
From the documentation, it’s quite clear what you need to do:
Create a file AppSourceCop.json in the root of your workspace (next to the app.json)
Fill it with the property “MandatoryAffixes”, and provide all affixes that you intend to use in the app. Here is an example of the content of that file:
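Something like this (the affix is mine; the country values are a hypothetical example, not the ones from the original screenshot):

```json
{
    "mandatoryAffixes": [ "WLD" ],
    "supportedCountries": [ "BE", "NL", "US" ]
}
```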
You’ll see that IntelliSense will help you complete this file… . As you can see, I also provided the supported countries, as that will also be mandatory (by a coderule) in release 16. But let’s focus on the affixes in this post ;-). What I told the AppSourceCop in this case is that I will use only one affix, being “WLD”.
Done?
Well, you’re not done yet. All you’ve done now is tell the AppSourceCop which affixes it needs to check the objects and controls for. That way, it can “remind” you that you need to pay attention to it, like here:
Obviously, it’s your job now to take this into account, and give decent names to your objects, fields, .. .
Wait .. it’s not going to provide that affix for me??
No! But don’t worry – that’s where I come in ;-). You might be familiar with my “CRS AL Language Extension” in VSCode that can handle renames of files. Well, during the rename, it can provide a prefix or suffix. So by simply setting some extra settings for my extension (and yes – by providing the suffix again), you’ll be able to get this going in an automated way:
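These are the settings I mean (a sketch – double-check the exact names and the spacing of the suffix value against the extension’s documentation):

```json
{
    "CRS.ObjectNameSuffix": " WLD",
    "CRS.OnSaveAlFileAction": "Rename"
}
```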
In short, this setting will automatically apply the WLD suffix where needed when you save a file.
There is a disconnect though …
The AppSourceCop allows multiple affixes .. and my CRS AL Language Extension does not. So whenever you intend to use multiple affixes, I’m afraid you’ll have to disable the automatic rename by my extension, because it will only apply one. Simply remove the settings for the suffix, and provide it manually (AppSourceCop will remind you where ;-)).
Even more – if you enable the CodeCop in the next release, you’ll even have another rule to worry about: file name conventions. So, if you enable codecop:
This is something you could get in v16:
I actually already blogged about that very recently, so I’m not going to repeat it. BUT – this coderule is also going to assume that when you use “MandatoryAffixes” (which you have to in the next release), you DON’T use the affix in the filename (read the error above carefully, and you’ll notice it doesn’t want my affix in it).
Well, I had foreseen this setting for it:
But that also is only useful for one suffix or prefix at this moment.
In short – auto renaming can become a challenge … :-/ … and that sucks…
Stay tuned…

These are headaches that I intend to solve for you in the near future. This is what I intend to do in the short term:
I’m going to read and take the “MandatoryAffixes” setting from the AppSourceCop.json file into account
When you have enabled “RemoveSuffixFromFilename”, I will loop over all affixes, and remove them when used as a prefix or suffix. The danger there is that I’ll remove too much, or make the wrong assumptions, so I’ll make it case sensitive, and probably stop removing after I find one affix – just to minimize the risk. The future will tell how well it works. But this will hopefully solve the fact that the CodeCop wants us to disregard a multitude of affixes.
I will keep the “ObjectNameSuffix” and “ObjectNamePrefix” settings. They will act as the “current” setting. Meaning that when I find that one of the affixes in the AppSourceCop.json is already applied, I’m not going to apply anything anymore. If not, I’ll apply the one from the settings.. . I’ll try to make this case sensitive as well.
Or that’s at least how I have it in my head now … . I also realise that simplicity is key – I shouldn’t make this too complicated to use. So .. yeah .. this did give me quite some headaches already ;-). So, if you have any idea, feedback, tip, … please share ;-). I’m open to all ideas.
As you probably already picked up in the media – Microsoft has released its plans for this year’s Wave 1 release (in April). You can find all the information here: https://aka.ms/Dynamics365ReleasePlan.
The document contains all the information on everything we want to know regarding “what’s next” for all related applications, like Marketing, Customer Service, Field Service, …. and also our beloved “Business Central”. And since this is a blog about Business Central – let’s focus on …
But – I’m still going to talk about at least a few of these points. I did that before, and last time, I remember I ranted about one thing (Code Customized AL) – let’s see if there is something to rant about in this release ;-).
What am I looking forward to?
You’ll see I’m focusing mostly on tech, and less on functionality. That’s just me – sorry ;-).
I don’t know exactly yet what to expect from this and in which way it will be used in default Microsoft apps for Business Central, but it is at least an interesting new capability in terms of code architecture.
Yes! Sounds perfect! By using Shift+Alt+E, you’ll get a list of all available events that you can easily search through:
Tip: it seems you can already use it – just use the vsix from the insider docker image if you have access ;-). Just be careful though – it seems to add all parameters, even if you don’t need them.. (which in my book is bad practice).
When I read this, my immediate reaction was: “uhm, ok, whatever”. I wasn’t really waiting for this; however, I do think this can really come in handy. Let’s see…
I can’t say I needed this, to be honest, but in the future, when real refactoring is going to be necessary, I really think this is very useful to clearly explain the ins and outs of certain functionality that will have to change, and how to deal with it if you depend on it.
But then again, in my company, we’re rebuilding the product from scratch. If you’re converting/migrating, I can imagine that “refactoring” will come rather sooner than later ;-).
You can see that the AL language is steadily growing into a more mature dev environment, isn’t it? This is something that is so “normal” to have in any other language, and it’s finally coming to AL. Though, I must say, there are other fish to fry, I guess ;-).
It seems – but I’m not sure – that this is an answer to “how do I transfer data from a customization in the base app to an extension”. How this moves to BC online, I don’t know – but definitely worth digging into!
This is obviously really important and needs no further explanation! I just hope it’s going to be a painless process, because upgrading to 15 .. has.not.been.painless.at.all! But a migration is not an upgrade either, is it? ;-).
Well, I guess this “Application version for aliasing base application” thing. I’m not saying it’s useless, but it looks like this is only added to facilitate “Embed Apps” – which is nothing more than “Code Customized AL” (in my opinion) – and you know how I think about that ;-).
Added to this, the “propagateDependencies” change will be introduced in the next CU update – and it has brought me nothing but pain – downloading symbols in a build pipeline just became more complex (luckily, navcontainerhelper eases the pain ;-)).
Is that all that is new?
Of
course not! I just didn’t want to
“just” copy/paste all items just for the fun of it. All Tech things, ok, but there is so much
more that is on the plate:
Import profiles and UI customizations – very necessary to be able to use this more efficiently (by consultants and admins at customers). I just hope it’s going to be stable, because I have been removing all customizations too often because of client crashes lately :-/.
Saving the URL as a bookmark will include filters now – interesting :-).
Improvements to data entry is something everyone likes to read about in regards to a web client. But it’s not clear (to me) yet what it actually includes.. .
Monetization for AppSource apps – for sure. I might have misread or missed it, but I still don’t see it on the roadmap. Why? Why oh why? Do we really want hundreds of apps on there, all with a different implementation of monetization? That doesn’t make any sense.. .
You can see he has much more experience with an actual “load” of customers on BC SaaS. And I can only agree with his list and argumentation.. .
Ideas
Anyway
– a big “driver” of this content is the “Ideas” website
where they gather ideas from the community.
You can easily access it by: aka.ms/bcideas. If you miss out on anything, the first
question you should ask is: “did I ask for it?” ;-).
Conclusion
Nothing
much to conclude besides quite the
same as Erik : nothing groundbreaking, but definitely improving and a
further evolution.
NAVTechDays has already been over for a while .. and yes, I already blogged about it. But I recently had to refer to a part of my session on “Development Methodologies”, and I noticed that someone named “Marcus Nordlund” actually put quite some time into completely “menutizing” the video in the comment section of the video :-).
An awesome effort that I needed to share! Thanks, Marcus!
In the evaluations, this session was rated “Best Session” and I was rated “Best Speaker” of the conference – something I’m really proud of, given the awesome content and speakers every single year :-).
You might have figured – I’m a VSCode fanboy. One of many. You might remember the session I did at NAVTechDays 2017 (Rock ‘n Roll with VSCode), where I dove quite a bit into the possibilities this great tool comes with. But I didn’t talk about the concept of “Multi-root Workspaces”: an ability of VSCode for you to work on multiple “projects” at the same time, in one environment.
2 years later, at the last NAVTechDays, I talked about dependencies quite a lot, and when you think of it – in terms of dependencies in AL for Business Central – these “Multi-root Workspaces” might make a lot of sense, because when you have all these apps, you might have to work on multiple apps at the same time.
Even more, in that same video, you’ll see that I ALWAYS have at least 2 apps: an app, and its test-app. So in terms of “Test Driven Development” (a methodology I believe is indispensable), you will always have an app and a dependent app, and you will always work on both apps at the same time. So – “Multi Root” to the rescue!
What
Well, the concept of “multi-root workspaces” is actually most simply explained as: opening multiple projects (workspaces) at the same time, to be able to work on multiple pieces of software at the same time. Or in terms of Business Central: to work (compile, publish, develop, …) on multiple apps at the same time.
The one downside of this concept is that not every VSCode extension might be ready for a multi-root environment. In fact, it took a while to get my own extension (the CRS AL Language Extension) ready for multi-root workspaces (and there still might be issues with it ;-)).
Same for Microsoft’s AL Language extension. It took a while – but just imagine:
When you’re working on extension A, on which extension B depends – extension B will recognize the symbols of ext A without you even having to create (compile) an app file – just because ext B is in your multi-root workspace
When you’re debugging, you’re able to go to definition into the al-files of other apps you’re debugging – not just the only-one-level-dal-files. Just because you have the dependent app in your workspace.
Whenever you compile a main app, the symbol files of the dependent apps are updated with the new version of the main app. Just because it’s in your multi-root workspace.
All this is already possible!
And that’s what a real “multi-root” experience can give you – and why I think we should always look into this. It makes all the sense in the world – in a world with “lots” of apps and dependencies – to work on them simultaneously. And I’m sure – if you’re not yet doing it – it can speed up your development process even more!
In Practice
Maybe a few screenshots can show you how it could look.
In this case, I’m actually working on 7 apps at the same time, with all of them having a test-app as well (which we use for unit-testing). The last app is the integration-test-app.
How does it determine what app I’m working in?
Well, simply by the active editor. If I want to compile and publish the BASE app, I open one of the files of that app, and simply press F5.
Symbols constantly updated over all workspaces
In my case, the BASE app is a library app for all other apps. And if I simply start coding in that BASE app, like this useless codeunit:
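Something like this (a hypothetical stand-in for the codeunit in the screenshot):

```al
codeunit 50105 "BASE Library"
{
    procedure HelloFromBase(): Text
    begin
        exit('Hello from the BASE app!');
    end;
}
```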
I would – without even compiling my app – be able to code against this in any app that is dependent on that BASE app:
Basically meaning: I can code against all apps at the same time, without even downloading/updating symbols, or compiling or publishing.
Updating Symbols
Even more: when you hit compile, you’ll notice that the symbol files in the .alpackages folder are updated for all apps that depend on the app you’re compiling. Very cool!
Can I control this a bit?
Well – you do have the “dependencyPublishingOption” in the launch.json, which has 3 options:
Default: dependency publishing will be applied
Ignore: only the current project is published; dependencies are ignored.
Strict: publishing will fail if the project has apps that are not part of the workspace and depend on it
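In launch.json, that looks something like this (a minimal sketch; server settings omitted):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "al",
            "request": "launch",
            "name": "My sandbox",
            "dependencyPublishingOption": "Default"
        }
    ]
}
```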
I like the default setting – sometimes it takes a little longer to include all dependencies in the compile – but at least you’re checking all dependencies .. so that must be good ;-). But, in some cases – especially in the case of big apps with many objects – you might want to avoid unnecessary recompiles of dependent apps.. .
Does this mean everything needs to be in one (Git) repository?
No! VSCode is smart enough to handle multiple workspaces at the same time. In the case of the screenshot above – and as mentioned in a recent webcast I did about handling dependencies in DevOps – all my apps are in separate repos, together with their test-apps. So in the case of the screenshot, it would be a collection of about 8 git repositories. When you switch to the “Source Control” window, VSCode clearly shows the states of all the repositories, like you see here in the screenshot:
You can simply see which repos need attention (new/modified/deleted files), which branches they’re on, and what the synchronization state is. From this window, you can obviously also change branches, sync, and so on.. .
LaunchJson_CopyToAll.ps1: If you change the docker image or whatever – this script will simply copy one launch.json to all other workspaces, so that you can easily publish the other dependent apps to the same sandbox.
Symbol_Cleanup.ps1: This script removes all symbol files (conceptually sketched after this list) – just imagine you want to publish against another version (localization, insider, ..): you can simply clean up all files and even invoke the next script, which will …
Symbol_Download.ps1: This script will download as many symbols as possible for all workspaces that are available in your multi-root workspace.
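Conceptually, the cleanup boils down to something like this – a simplified PowerShell sketch, not the actual script (the root path is an assumption):

# Remove every downloaded symbol package (*.app) from all .alpackages
# folders underneath the root that holds all your repos.
$root = 'C:\Source\MyProduct'
Get-ChildItem -Path $root -Recurse -Directory -Filter '.alpackages' |
    ForEach-Object {
        Get-ChildItem -Path $_.FullName -Filter '*.app' |
            Remove-Item -Force -Verbose
    }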
I’m also working on a “CompileAll” script – it would be nice to have one script that figures out the dependency tree (I already got that one), then starts compiling all the apps – and – maybe – if possible – even starts publishing them to the server instance ;-). Let’s see where we end up with that one ;-).
There might be issues with the scripts – they are brand new – but please: use, abuse .. and contribute ;-).
You probably know about the twitter hashtag “bcalhelp” .. a way for you to ask for help on twitter about anything AL for Microsoft Dynamics 365 Business Central.
Well – yesterday, it made me smile … someone was trying to find something on Vjeko’s blog. Pretty clever to use bcalhelp for it :-), because he immediately got a response. Here is the twitter thread (if you can call a question/answer a thread ;-)):
There is no search button indeed. Try with /s?=[criteria] ending, like this:https://t.co/OvDpITUHsW
You see that “trickynamics” got an immediate response from “Marton Sagi” .. And if you don’t know Marton Sagi – then you probably realize that you actually do, because he’s the guy behind the AL Object Designer – one you should add to your VSCode extensions ;-).
Anyway – not being able to search Vjeko’s blog is something that needs to be fixed. It’s unthinkable. So I decided to contribute to that – and I would encourage all the socially-engaged people in the community to blog or share this one on all levels ;-).
How to search Vjeko’s Blog:
Well, it’s a WordPress blog, and every WordPress blog can be searched by adding “/?s=querystring”. Just try to search my blog on the top right, and see what URL you get. You can do the same with Vjeko’s blog.
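For example (the search term is just an illustration, and I’m assuming the blog’s usual address):

https://vjeko.com/?s=control+addin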
How come the search is removed? Doesn’t Vjeko want us to search his blog anymore?
Well, I talked to him, and he is actually in the process of moving his blog to his own server – and moving a WordPress site apparently comes with some challenges .. so let’s give him some time, I guess ;-).
Disclaimer
Yes, with all the heavy content out there, I thought it was time for a lightweight article – which is mainly meant as a joke to put a smile on some faces ;-). Enjoy…
Not too long ago, I did a webinar for ALOps. The idea of these webinars is simple: how can you get started with DevOps for Microsoft Dynamics 365 Business Central. And I’m not going to lie: I will focus on doing that with ALOps. But that’s not the only focus of these videos – I will touch lots of stuff that has nothing to do with ALOps, but more like strategies, simple “how to’s” and so on – which should be interesting to anyone that is trying to set up DevOps in any way. The webinars will end up on YouTube, and in the description of the video, I will try to give an overview of the handled topics, and create direct links to the exact place in the video. Here is an example of the description-section of the first video:
Now, in that webinar, I explained in just a few minutes how to set up a build agent for DevOps. And actually, there is more to say about that – so I’d like to expand on that a little in this blogpost.
What is a DevOps agent?
Just think of it: whatever you define as a build, whatever you define as a release – scripts need to be executed in some kind of environment by “some service on some server”. A DevOps build agent is this “some service”: a service, installed on “some server”, that will execute the steps you define in a pipeline.
How do you do that?
Before we go into that, let’s first talk about WHAT you will need. And that is “Docker”. I strongly believe that if you don’t build with Docker, you’re building the wrong way. It would mean there is some kind of database/serverinstance waiting for you to build against – one that would have to be cleaned at every start of a new pipeline (take schema changes into account and such..). Building a Docker container at the start of every pipeline, on the other hand, means you have a clean, isolated environment every single build. You can’t get any more stable/isolated/encapsulated than that.
So .. long story short – let’s assume we all use Docker for our build pipelines .. in that perspective, our server with the DevOps agent needs to be able to:
Use Docker
Set up containers that have a certain localization and version of Business Central
Well – now that we know that – let’s review the options we have for creating a DevOps agent:
Microsoft-hosted agents: Not really my favourite. Microsoft has agents ready for you to use. At first it seems very interesting and secure, but there are challenges:
You have Docker, and in many pipeline runs, the same image could be reused. But since the Microsoft-hosted agent (VM) is discarded after one use (which is secure and all, sure), your next run will again have to pull that image from the Docker repo .. no way to reuse Docker images.
If something goes wrong, there is no way for you to fix / investigate / replicate the problem on the agent (no way to remotely log in).
Slow: not because of resources (those are quite ok), but because you’ll have to download the entire Docker image every single time.
Security limitations: you could run into PowerShell/security limitations that you don’t have under control, like local file access, the ability to download a license in a secure way, .. these things.. . It’s hard to specify which exactly, but if it happens, it is difficult to work around – and usually it means you simply have to accept that that specific step is not going to work on a Microsoft-hosted agent.
Self-hosted agents: That’s where I will refer to that section of the video I was talking about before ;-). It’s my favourite option, because this can be fast, cheap, redundant, flexible, debuggable, … however you want it. And as you can see in the video – it really doesn’t have to be difficult to set up a new agent.
AzureVM: Microsoft has foreseen a nice and easy way for you to use an Azure VM as your build agent. Simply use the template: aka.ms/getbuildagent, fill in the parameters, and everything is done for you – a few minutes later, an agent will pop up in your agent pool. It can’t be done any easier, in my opinion. But ..
So, what is your preferred way, waldo?
Well, as said: definitely not the Microsoft-hosted agents. Fast-running pipelines are important in my opinion, and this is certainly not the way to get them. Just look at this build pipeline on a public project – the “start docker” step will always pull the image and only then run the build. Don’t get me wrong – the agent itself seems to be pretty fast (faster than some of my other self-hosted agents), but the fact that it needs to download the Docker image every single time .. doesn’t make sense. If anyone knows a way to prevent this – I’m all ears.
For the same reason, AzureVM isn’t my favourite either. It’s great for showcasing, and as a backup scenario (should you quickly need an extra agent). But in terms of DevOps build agents, in my opinion, Azure is slow and expensive. I would never use Azure VMs as a long-term solution for my DevOps build agents.
So – I guess you know my preference:
Self-hosted agents
In essence, in this case, you have complete freedom on:
The hardware: You want faster-running pipelines? Invest in CPU power (GHz, not number of cores). You want multiple agents on one machine? You can do it!
Where it is hosted: your own data center? Under your desk? Some kind of naked motherboard with some memory and a CPU on top of your server rack (and yes, we do have this ;-))?
What is installed on it: do you want Docker or not? If Docker, maybe pre-load all necessary images at night?
So, this is the way I think about DevOps agents:
They need to be easy to set up
They need to be cheap
They need to be as fast as possible
I don’t care about redundancy of one server (I set up a pool of servers, which makes it redundant)
I don’t care about failing hardware (it’s easy to set up, and again: there is a pool)
So, we have been investigating cheap cloud solutions, like hetzner.de, which offers cheap and fast cloud servers. I think these are ideal if you insist on having a cloud server as a DevOps agent.
And we have been investigating some OnPrem configurations. This picture is a POC in our server room: 3 motherboards with different configs (fast GHz / lots of cores / lots of RAM / fast M.2, …).
We found out that there is really only one important parameter: CPU speed (not the number of cores, not RAM, not disk speed). So, now that you know that, go get yourself the cheapest 5GHz PC, and have some insanely fast builds ;-).
Disable Windows Updates
I know this sounds weird. But think about it: Windows updates can have a huge impact on the stability of whatever Docker image you are using for your build pipelines. Just take these blog posts from Freddy into account:
Maybe I don’t care too much about the hardware in terms of redundancy – but I do care about my pipelines producing stable builds .. and clearly a simple Windows update can mess up my build – all of a sudden, builds will start to fail. So my advice for a DevOps agent (and actually any kind of NAV or BC installation) would be (a sketch of the first step follows below):
Disable Windows Updates
Update Windows in a controlled manner, like:
Take a snapshot
Update
Run tests
If tests fail, roll back the snapshot
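For that first step, one (hedged) option is to simply disable the Windows Update service – a minimal PowerShell sketch, to be run as administrator:

# Stop the Windows Update service ('wuauserv') and prevent it from starting again
Stop-Service -Name 'wuauserv' -Force
Set-Service -Name 'wuauserv' -StartupType Disabled

# Verify the result
Get-Service -Name 'wuauserv' | Select-Object Status, StartType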
You don’t want your team to suddenly be unable to build, just because Microsoft changed how they are handling 32-bit applications in a February update.. .
DOAAAS – DevOps Agent As A Service
With ALOps, we’re thinking about providing a service for anyone that needs a fast DevOps agent, fast. A cloud service, so to speak, that people can use, where their code can be built fast. Speed is key. And cost should be minimal. And focused on AL – meaning:
Pre-installed Docker
Pre-loaded Docker images
Best practices to optimize speed and stability for AL builds
Controlled windows update
Your license and code are secure
…
What do you think – interesting? Or not worth the investment? Always nice to have feedback on this ;-).
Recently, I came across this post by Jack Mallender. An interesting idea on how to efficiently find AL Objects among your files. It basically comes down to using regex in combination with the global search functionality in VSCode, like (yep, I’m stealing this from Jack’s post – sorry, Jack ;-)):
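(I won’t reproduce Jack’s screenshot here, but the idea is a global search (Ctrl+Shift+F) with “Use Regular Expression” enabled and a pattern along these lines – my paraphrase, not necessarily Jack’s exact expression:)

^(codeunit|page|pageextension|table|tableextension|report) \d+ "?Customer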
It immediately convinced me that this would be very useful for everyone, so I was thinking – why not make it part of “waldo’s CRS AL Language Extension”? It didn’t seem too difficult for the experienced TypeScript developer – so for a noob like me, it should be doable as well ;-).
A few hours later – after a lot of googling – I found the 9 lines of code that made this easily possible .. I’m not joking ;-).
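I won’t claim my sketch below matches those 9 lines, but conceptually it boils down to handing a regex to VSCode’s built-in search. A hypothetical TypeScript sketch, not the extension’s actual source:

import * as vscode from 'vscode';

// Open the Search view, pre-filled with a regex that matches
// object declarations containing the given search string.
export async function searchObjectNames(searchString: string): Promise<void> {
    const pattern = vscode.workspace
        .getConfiguration('CRS')
        .get<string>('SearchObjectNamesRegexPattern', '^\\w+ (\\d* )?"*');

    await vscode.commands.executeCommand('workbench.action.findInFiles', {
        query: pattern + searchString,
        isRegex: true,
        triggerSearch: true,
        filesToInclude: '*.al'
    });
}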
So – I present to you – a new command as part of the extension: Search Object Names. Simply call the command, provide the search string, and look at the result in the search window:
Now, I made it so that when you are on a word, or have selected a word in the active editor, it’s going to take that word as the default search string. Just imagine you’d like to go to the source of a variable you’re on: simply make sure your cursor is on the word, and invoke the command:
Settings
Maybe it’s a bit overdone, but yes, there is a setting as well (CRS.SearchObjectNamesRegexPattern), because you might want to search differently than I do .. . We were discussing that on twitter, and I decided not to decide for you how you should search, but to let you set it up yourself if you’d like to search differently than me. Let me give you a few options on what would be interesting settings…
Find the source object (default)
Pattern:
'^\w+ (\d* )?"*'
Setting in VSCode:
"CRS.SearchObjectNamesRegexPattern": "^\\w+ (\\d* )?\"*"
// Mind the escape characters
This is the default pattern, which means you don’t have to set anything up for this behaviour. Basically, this pattern will match any line that starts with a word (the object type), then optionally a number (the object id), and then your search string.. . In other words: the exact source object of the object name you’re searching for.. . For example, searching for “Customer” will find the object declaration itself, but not the variable declarations that merely reference it.
Find all references
Pattern:
'\w+ (\d* )?"*'
Setting in VSCode:
"CRS.SearchObjectNamesRegexPattern": "\\w+ (\\d* )?\"*"
// I just removed the "^" (start-of-line anchor) from the default setting, so the pattern can match anywhere on a line
This pattern will match any occurrence in code – which means: also the variable declarations. Let’s say it’s an alternative “where used”. I won’t set this up as my default, but I might just change it ad hoc in the search by simply removing that character.. .
Find anywhere in the name
Pattern:
'^\w+ (\d* )?"*(\w+ *)*'
Setting in VSCode:
"CRS.SearchObjectNamesRegexPattern": "^\\w+ (\\d* )?\"*(\\w+ *)*"
// basically added that there could be multiple words before the searchstring.
This pattern is somewhat more complicated, but if you don’t want to rely on your search term being the beginning of the object name, but rather “somewhere” in it, you could use this one.
Indeed – I didn’t see any official statement yet – but it’s obvious: v16 is the current latest MS release .. and if you don’t believe me – just check Docker (the latest “current” release is already v16 – docker image “mcr.microsoft.com/businesscentral/onprem”)…
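If you want to check for yourself, something like this should do – a sketch, assuming Docker is running (BC images carry their build number in a “version” label):

# Pull the latest 'current' onprem image and read its version label
docker pull mcr.microsoft.com/businesscentral/onprem
docker inspect mcr.microsoft.com/businesscentral/onprem --format '{{ index .Config.Labels "version" }}'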
I’m not going to bore you with what is already online – I will simply point you to the resources I could find today:
Expect work to be done! At this point, I’m moving our 18 apps to v16, and I want to comply with the bunch of extra code rules and new concepts Microsoft has foreseen in this post … . I promise you – a LOT of work :(. But more about that in a next blogpost ..
You remember this post? I tried to warn you that when v16 came out, there would be a new code rule that checks your filenames – and (if you don’t disable it) you’ll have to comply with Microsoft’s file name convention. If you don’t automate your file naming, then you’re in for some .. uhm .. challenges. I just made sure that the automation of the filenames complies with Microsoft’s rules .. .
I need to correct my story in that post though. I had been working on this “RenameWithGit” setting, which didn’t work with multi-root workspaces and had some other stability problems. Only after my post – thanks to a reaction from James Pearson on twitter – did I learn there is a much simpler way to do this.
First of all …
Forget about the “RenameWithGit” setting
Indeed – just forget I ever built it. I’m actually thinking of taking it away in the near future. I already removed it from all my workspaces, and I strongly recommend you do the same. It doesn’t work like it’s supposed to work .. and I’m embarrassed enough about it ;-).
What Git actually does when you stage is compare the files – and when more than 50% of a file’s content is the same, it will mark it as a “rename” instead of a delete of the old file plus a new file.
That’s smart!
And yes, indeed .. I have been immensely wasting my time on that “RenameWithGit” setting :(.
Can I make sure everyone always stages before committing?
Well .. it’s actually good practice to always “intentionally” stage. You must have seen this message already:
In VSCode, it’s called “smart commit”. But honestly, in my opinion, the smartest commit is an intentional commit. I don’t like this message, and I switch it off by adding this to my user settings:
"git.suggestSmartCommit": false
I’m not forcing you to do so .. but this way, you can easily check in VSCode whether the rename of a file was actually a rename – and not a delete plus a new file. Like it was intended.
So, what is now the safest workflow to rename my files?
Quite the same as the one I mentioned in my previous post about this – but a bit different.
1. Create a new branch
Yep, I still recommend doing the entire process in a separate branch. Of course. It will give you a way out if you mess up ;-).
2. Change the setup
The setup is actually very similar to the one in my previous post, only now with “RenameWithGit” = false. To match the file name convention of Microsoft .. this is what I would use:
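(The screenshot isn’t reproduced here, but in settings.json it would look something like this – a sketch; do double-check the placeholder names against the extension’s documentation:)

{
    "CRS.RenameWithGit": false,
    "CRS.OnSaveAlFileAction": "Rename",
    "CRS.FileNamePattern": "<ObjectNameShort>.<ObjectTypeShortPascalCase>.al",
    "CRS.FileNamePatternExtensions": "<ObjectNameShort>.<ObjectTypeShortPascalCase>.al",
    "CRS.FileNamePatternPageCustomizations": "<ObjectNameShort>.<ObjectTypeShortPascalCase>.al"
}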
Alternatively, you could add the “RemovePrefixFromFilename” or “RemoveSuffixFromFilename” settings – but then make sure you also set up the mandatoryAffixes setting in the AppSourceCop.json, so the cop accepts the removal of the prefix or suffix.
3. Commit
This commit is there because you might want to revert after a failed rename attempt.
4. Rename all
This is the same “Rename All” function I talked about in my previous post:
It will rename all files of the currently active workspace (the active workspace is the workspace of the currently active document (file)) – not all workspaces. So you probably have to do this for every workspace separately. With “RenameWithGit” set to false, I expect no mistakes. I was able to apply this to +6000 files today .. so it’s tested, I guess ;-).
5. Stage
This is THE step I was talking about earlier. Here you can check whether the rename was successful – all renamed files should indicate an “R”, like:
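(On the command line, the equivalent check would be “git status --short”, where staged renames show up with an “R” as well – output sketched with made-up file names:)

git status --short
R  Customer.al -> Customer.Table.al
R  CustomerCard.al -> CustomerCard.Page.al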
When you see that – you’re good to go and …
6. Commit
Just commit, push and create a pull request to the necessary branch .. and you should be done!
Wait .. so I can do this in a multi-root workspace as well?
Yes indeed – this flow does work in a multi-root workspace. Do execute it for every workspace separately though, as mentioned before. That’s how I implemented it.. .
Conclusion
It’s a piece of cake. Really. So just do it, comply with the naming convention, and don’t feel like you’re cheating on Microsoft ;-).