
Model Driven software development for Business Central with “mdAL”


You might already have seen my session on DynamicsCon about Design Patterns. During that session, I mentioned a tool “mdAL”. If not .. well, here it is (starting at the mdAL part):

This tool has seen some major updates – time to blog about it. ;-).

Disclaimer

I am not the developer of this tool – so not a single credit should go my way. The main developer of this tool is Jonathan Neugebauer, a German genius who made this for his master thesis, and who is putting this out there for free for us to benefit from. And I promise: we can benefit from this .. a lot!
I just discovered this tool by chance (basically Jonathan contacted me through my blog for a review) – and since I worked on something very similar 18 years ago (VB.Net, generating code from Rational Rose (UML) models – don’t ask .. ), I was immediately hooked and saw immense potential!

Model Driven Engineering

To understand the goal of the tool, you need to understand what is meant by “Model Driven Engineering”: it is a methodology that focuses on creating and exploiting domain models (let’s call it “design patterns”), which are conceptual models of all the topics related to a specific problem.

Indeed .. I got that from Wikipedia ;-).

Let me try to explain this by means of some …

Examples to convince you that this is more useful than you might think ..

1- We want to create master entities where we implement the default stuff, like number series. A lot of this code is based on a pre-existing “model” of how to implement No. Series. Just imagine this can be generated in minutes…

2- We have a posting routine, based on a master table and documents. So we need journal, ledger entries, register, document tables, … and the codeunits for the posting routine. Every time quite the same, but a bit different. What we did in the old days: copy “Resource Journal” and all its codeunits, and start renumbering and renaming ;-). Just imagine this can be generated in a few minutes…

Don’t imagine, just install “mdAL” and “do”!

As you have seen in the video, all this is possible today, in AL, in minutes. Jonathan’s ultimate goal was to have a model driven AL, which basically means: you don’t write code anymore, you basically model it in an understandable language, and the code is generated for you.

Today, the tool has had a major upgrade: you can now spit out code without needing a full model. Basically meaning: you can now use it as a pure code generator as well. In my opinion, this makes its functionality even more useful for the community!

The tool can be found on the VSCode marketplace here: mdAL – Visual Studio Marketplace. Please take some time to read through the comprehensive documentation he has provided as well here: mdAL (mdal-lang.github.io). It’s not long, and it’s well worth the read! It explains how to install it, a quick start, the snippets (yes, it has snippets – it’s not even going to be minutes, it could be just seconds), and so on.

Let’s see how to handle the above examples

1- just master tables

Consider this:

This is an mdal-file with a description of my master-object “Container”. All this can be created with mdAL-snippets – so this really is just a matter of seconds. Very readable:

  • A master-object “Container”
  • A card and list page
  • A template “Name”

Then, we simply “Generate AL Code”

And as a result, you’ll get compilable code in many objects, with many models implemented. The generated code can be found in the src-gen folder:

It has…

  • implemented the table, with:
    • No. Series
    • Search Name with validation code
    • A blocked field
    • Comment, which is a flowfield to the default (and extended) comment table
    • Fieldgroups are defined (dropdown and brick)
    • Record maintenance to sub tables (commentline)
    • AssistEdit for No. Series
  • a setup table for the number series
  • A list page with
    • Action to Comments
  • A card page with
    • AssistEdit implementation
    • Action to Comments
  • A setup page in the right model
  • An enumextension for the comment line

And more!
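To give you an idea of the boilerplate this saves, here is a minimal sketch of the classic No. Series pattern that such a generated master table contains – the object names, numbers and the “Container Setup” table are mine, not mdAL’s actual output:

table 50100 Container
{
    fields
    {
        field(1; "No."; Code[20])
        {
            trigger OnValidate()
            var
                ContainerSetup: Record "Container Setup";
                NoSeriesMgt: Codeunit NoSeriesManagement;
            begin
                // manual numbering is only allowed if the No. Series permits it
                if "No." <> xRec."No." then begin
                    ContainerSetup.Get();
                    NoSeriesMgt.TestManual(ContainerSetup."Container Nos.");
                    "No. Series" := '';
                end;
            end;
        }
        field(2; "No. Series"; Code[20])
        {
            TableRelation = "No. Series";
            Editable = false;
        }
    }
    keys
    {
        key(PK; "No.") { Clustered = true; }
    }

    trigger OnInsert()
    var
        ContainerSetup: Record "Container Setup";
        NoSeriesMgt: Codeunit NoSeriesManagement;
    begin
        // pull the next number from the series defined in the (generated) setup table
        if "No." = '' then begin
            ContainerSetup.Get();
            ContainerSetup.TestField("Container Nos.");
            NoSeriesMgt.InitSeries(ContainerSetup."Container Nos.", xRec."No. Series", 0D, "No.", "No. Series");
        end;
    end;
}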

How awesome is this? Don’t need comments? Remove it! Don’t need No. Series? Remove it!

Oh – did I forget to add “Dimensions” into the equation? Hold my beer…

Done! 2 seconds.
Result: we now have Dimension fields and code where necessary, like:

And even with events:

2- Posting Routine & Documents

Not enough? Then let’s now take it to a whole other level. Posting routines and documents.
Consider this:

I agree – it’s too simple, I need more fields – but look what happens when you generate code. All these files were generated:

A glimpse:

  • All posting codeunits with boiler plate code
  • Source Code implemented and extended in source code setup
  • Journal, Ledger entry and register
    • The journal already has 15 “boiler plate” fields (did you see how many I defined?)
  • Header and line with all boiler plating
  • Document number series as well
  • Document status (release/reopen)
  • Posting No. Series

Just too much to mention.

My conclusion and recommendation

My conclusion is simple: this is going to save us a crapload of time! Even if you “just” need some master table and some sub tables (yes, sub-entities are possible as well!). In my opinion it’s an unmatched level of code generation in our Business Central Development world. This is going into the extension pack!
My recommendation: play with it! Get used to it! Give feedback to it if you have any! And once you’re comfortable with it – thank Jonathan! ;-).


Check Customer License in an OnPrem db – from the web client


This is my first post of 2021, so I’d like to take the opportunity to wish all of you all the best, all the health, all the safety, and .. may we finally meet again in person in 2021!
I’d be very happy to have much less of this:

And more of this:

Anyway ;-). Let’s get to the topic..

I got an internal question from our consultants on how they can access the license information of an OnPrem customer. There are a few scenarios where that’s interesting. Just to check which license is in production. Or how many objects or users they bought. Well – anything license wise ;-).

And you know – a consultant usually doesn’t have access to SQL Server, PowerShell or anything like that. So it had to be accessible from the web client.

I was like: that’s easy! There are virtual tables that contain that information:
– “License Information” contains the actual text of the license file.
– “License Permission” contains the information on object level.
And since we can run tables, it’s just a matter of finding the right Table Number, and off we go.

You might remember my blogpost: Getting not-out-of-the-box information with the out-of-the-box web client? It explains that I can get to all objects through table 2000000038, and there, I can find the corresponding table numbers:
– “License Information”: 2000000040
– “License Permission”: 2000000043
So what the heck – just run it with that ID, and that’s it, we’re done! Consultants have a solution!

Well … no …

In the web client, you can’t just run every system table. I suppose the ones that are not actual data in a SQL table, and that need to get their data through some business logic, can’t be shown by running the table.

So here’s a little trick

You can show those tables by creating your own page based on the table. And with the wizards of Andrzej’s extension, it’s super easy to do!
Just:

Create a page (List):

Add the fields:
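If you’d rather see it in code: the end result is roughly this (the object ID is mine; the virtual table basically just has a line number and a text field):

page 50100 "License Information"
{
    PageType = List;
    SourceTable = "License Information"; // virtual table 2000000040
    ApplicationArea = All;
    UsageCategory = Lists;
    Editable = false;

    layout
    {
        area(Content)
        {
            repeater(Lines)
            {
                field("Line No."; Rec."Line No.") { ApplicationArea = All; }
                field(Text; Rec.Text) { ApplicationArea = All; }
            }
        }
    }
}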

Done!
When you run this page now, you’ll see the license information from the Web Client.

Is this useful? For us it is. Our consultants now have a page (it’s part of one of our library-apps) they can easily check themselves without bothering someone who is familiar with PowerShell.. .

If you want to know how to do it with PowerShell – well – there is an out-of-the-box CmdLet: Export-NAVServerLicenseInformation
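For example (against a local server instance called BC180 – adapt to your own):

Export-NAVServerLicenseInformation -ServerInstance BC180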

I personally didn’t see another easy solution for getting to that information from the Web Client. I do know, this involves development and deployment .. And also maintenance of an app .. . So if you have a less intrusive solution, I’m all ears :-).

Publish & Run Current Object from VSCode – with a single command


I’m not writing a blog about every single (new) command in my “CRS AL Language Extension”. But this Sunday, I added an interesting one. One that I should have created a long time ago – but simply didn’t think of, until Daniel (TheDenster.com) explicitly asked for it on GitHub.
Just imagine, you’re building an app, with many pages, and you want to build a page, test it, build the next one, test it .. . You kind of have to:
– Publish the app
– Run the object after the app was published
The request was simple: combine both steps in one command in VSCode:

Publish & Run Current Object

And personally, I have been using it all day :-). So I thought – let’s just share it, so people know it exists and don’t have to find out by accidentally running into it ;-).

How to run it

Well, it’s simple, just:

What does it do .. exactly!

Well, it IS going to run a command from the “AL Language” extension first. The exact statement is this one:

The reason why I’m running the “without debugging” version is that it’s the only way for me to avoid having 2 browser tabs opened. Because the publish-step will open a tab, and then the “Run Object” (which simply opens a URL) will open another tab.
You might say: “dude, you got a setting for that in the launch.json”. And you are right. In the launch.json, we can set “launchBrowser” to false:
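For reference, that’s this property on your existing configuration in launch.json (server and names are placeholders):

{
    "configurations": [
        {
            "name": "Local Sandbox",
            "type": "al",
            "request": "launch",
            "server": "http://localhost",
            "serverInstance": "BC",
            // prevents the publish step from opening a browser tab
            "launchBrowser": false
        }
    ]
}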

But …
This only works with the command “publish without debugging”, not with the normal F5. So .. that’s why I’m using this command ;-).

Keyboard Shortcut

You might have been using the “Run Current Object” already in the past.

Well, this still exists, and has a keyboard shortcut for it. I didn’t do that for the new command, but do know, you can do that yourself! You can even take this keyboard shortcut, and attach it to this new command.
Simply go into

Find the command and press the “+” sign on the left of it

Now, press the keyboard shortcut that works best for you – you can even override another one, like I did here to override the already existing “Run Current Object” shortcut:

Now you’re done.

Enjoy!

Microsoft Dynamics 365: 2021 release Wave 1 plan


It’s that time again! Recently, Microsoft released their plans for the next major version of Dynamics 365.

Quite some time ago, I started commenting on these plans (I actually did that because one of the features at that point – “code customized base app” – was something that I felt needed some criticism (and still do, by the way ;-))). So, by matter of tradition – and it being a good way for me to get myself informed – let’s go through the points, and give some comments or extra information (if any).

Is this post useful? Maybe not – you can just stop reading and get all the facts here:

What I usually do, is categorize them in my favorite added features, and my somewhat less favorite features.. . This is something you need to decide for yourself, obviously – but it is still my blogpost, so yeah, let’s just do that again ;-).

My favorite new features

Partners can add keys (indexes) to base tables and table extension tables

Oh yes – you read it right! This came a bit as a surprise to me (I might not have been paying attention) – but a very welcome surprise ;-). Finally we’ll be able to better tune indexes – at least in the “adding” part. It doesn’t say much about the limitations .. but any added ability along these lines is very much appreciated!

If you want to know more about the actual scope .. Kennie explained some more on this Twitter thread:

Also the removal of indexes (also very necessary in tuning-scenarios) will not be part of this release yet – but at least it’s in scope ;-).

Report extensibility

Again .. HUGE! Finally! Finally we’ll be able to extend a report instead of copying it to our own range and substituting it. This will tremendously simplify customizations in PTE and OnPrem. Awesome! It was definitely one of the frustration points of my devs.. .

Dimension corrections (for G/L entries)

This is one of the Application enhancements that Microsoft worked on – and an important one. I think every one of us at some point either had to correct dimension postings, or create some solution so that users could do that themselves. Nice one! At least, we’ll be able to correct dimensions on the General Ledger Entries (not the source docs .. but it’s a big step forward!)

Report API allows passing the layout needed for report execution

This is actually quite a nice addition, and will definitely help drive (more) adoption of the custom report layouts feature! Nice idea!

Client performance improvements

Performance is always a big topic.  Especially in the cloud.  I remember a big Twitter thread recently that basically questioned again the performance in the cloud .. and the huge difference compared with OnPrem.  So I look forward to ANY kind of performance improvements!

The improvements in this topic are somewhat limited, though – they basically applied the same optimizations as we’ve seen regarding factboxes, but now also on Role Centers, where parts will only load when shown (so, when you scroll down). That said – the role center is an important page to improve nevertheless ;-). I do hope though there are more performance improvements in the queue .. something in the line of better (and more intelligent?) usage of the “SetLoadFields” principle in various scenarios.

Usability enhancements for the Business Central web client

It’s a general title, but when I saw the first item in the list of improvements, I already liked it a lot: “Double-click a record in a list”.  Adding “intuitivity” .. cool!

All Features

Again, the rest of the points seem – in my limited world – somewhat less important, simply because we either didn’t miss it, didn’t think of it, solved it already differently (like the printing features), .. or anything else.  I sure think in most cases, and for many partners, they absolutely have value – if not more than what I listed above!

I decided to have the complete list below, so you’ll find whatever is above also below, divided in the categories that Microsoft has put them in – some with a bit of comments of my own ;-).

Administration

Microsoft did some improvements for partners to support their customers, to administer their tenants and so on.

Application

It seems that there are quite some new application features .. That’s nice to see. Many people have been wanting more application updates. Why they’re in my “less favorite” section – well – I’m a developer ;-). So .. not really my place to have opinions ;-).

Better with Microsoft 365

What Microsoft means with this is simply the collaboration with other Microsoft 365 services. In this release, mostly Teams, Word and cloud printing.

Country and regional

Yep, as of this version, even more countries will be able to use Business Central:

Microsoft Power Platform

Power Flower.  This must be my personal most underrated topic .. something I really need to start diving much more into than I already did.  I will .. at some point .. I promise .. :-/

Modern Clients

In terms of the “Modern Clients” (web, tablet, phone), Microsoft focused on performance (I like it!), usability and printing.

Modern Development Tools

And oh yes, also for developers, there are improvements :-).  Mostly mentioned above.

Onboarding

Which API’s are available in my Business Central environment?


Here’s a short post with a small tip about something a lot of you probably already know. How’s that for an intro.

If you remember these posts:

Then you know there’s quite a lot of information .. right at your fingertips in the web client.

And that’s also the case for API information. Because really .. figuring out the available APIs in your system isn’t that easy at first sight. It is easy when you know where to look, though.

Well, if you want, you can get that info from a system table. Namely table “API Web Service”, which is table 2000000193. So, if you add “?table=2000000193” to the URL .. you’d get a list of all available APIs :-).
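For example, for an OnPrem web client (base URL and company are placeholders):

https://<server>/<instance>/?company=<yourcompany>&table=2000000193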

At least … if you’re working OnPrem. For some dark reason, I (admin) am not allowed to read that table in SaaS .. .

I wonder why .. I really am … . If anyone has a clue why – please put it in the comments.

Does that mean there is no solution in SaaS? Can’t I list all API endpoints simply from the web client? Well .. still yes, but with a little detour. In fact, it was the API guru AJ that gave me an alternative table that also has quite a lot of metadata: namely table “Page Metadata” (2000000138). If you filter the data on PageType “API”, you get almost exactly the same as with the “API Web Service” table – although only pages, not queries – but at least it works in SaaS.

But then you might wonder .. Isn’t there a table “query metadata” that I could use as well? Sure, that would be table 2000000142 :-). But … that one is again only available OnPrem for another dark reason :(.

Last but not least, you might wonder if there was an API way to get to all APIs. Yep! And it was again the API guru himself that showed me this undocumented feature. The URL you’ll need for this is:

https://api.businesscentral.dynamics.com/v2.0/{{tenantid}}/{{env}}/api/microsoft/runtime/beta/companies({{CompanyId}})/apiRoutes

It will basically give you a list of the “routes” to the different APIs (shows the publisher/group/version):

So, let’s say we’ll take the last entry. To form a decent URL, it’s simply:

https://api.businesscentral.dynamics.com/v2.0/{{tenantid}}/{{env}}/api/waldo/trainings/v1.0

Which will, in turn, give me a list of “API entities” that I can use for this publisher/group/version.

That’s it. Thanks AJ for helping to make the post a bit more complete ;-). Enjoy!

Setting up docker containers with apps from AppSource


For a long time, we have been looking for a solution to be able to set up a docker environment, and then download the necessary apps from AppSource.

Why?

For different reasons. Like, to set up a development environment. I know we can do that in an online sandbox, but .. we’re already doing this on docker for OnPrem development, so if we could simply set up dev environments for any kind of development we do – in a uniform way – including for SaaS development – well, docker would be my choice. I mean: a developer shouldn’t care where the app ends up – he just needs to create his feature, and then the CI/CD pipelines should take care of the rest. Where it is deployed .. why should a developer care?
And … since we’re talking about CI/CD pipelines – that is actually the main reason why we need to be able to set up docker containers with AppSource apps. Because that’s what we need to validate. Against multiple versions of those apps as well, by the way! And running pipelines against online sandboxes is simply not a valid, structured, scalable approach for the future. We need docker for that, and we need DevOps to be able to get to the AppSource apps “somehow”.

So .. how do we do it?

Well, you don’t. There is no way you can install AppSource apps into docker containers at this point. The only way we can run somewhat decent pipelines against those AppSource apps, is to contact the companies behind those apps, and pray they are willing to give us runtime apps (in a decent timeframe), which we need for symbols. But that’s nowhere near convenient, let alone scalable.

So .. did you just waste my time?

No. We know Microsoft basically needs to make that available. Like .. they could “simply” provide an API where we can download the runtime apps (which, I believe, they can even build themselves as well – or at least they could make it a requirement). But .. they might need some “convincing” that we need it (like .. yesterday ;-)).

Tobias has been so kind as to create an idea for it, which we can vote for! And here it is: Microsoft Idea · Ability to download a runtime package for a given version from AppSource

The title of this post might have falsely excited you a bit. And that was the intention. Just to get you to vote.
So – please vote :-)!

So .. I did a stream ..


Not too long ago, I asked about opinions on whether I should just keep blogging, or if I should also jump on the streaming-wagon like many already did .. and start streaming content about Business Central. In fact, that poll is still on my website, and today, it shows this result:

So, a vast majority isn’t really waiting for me to bring out any streaming videos :-).

But .. in all honesty .. I was too curious to try out this streaming thing as well – and in my opinion, I do think that some topics are more useful to stream about, than it would be to blog about.

So .. for testing, I did a stream ;-). Which means: I have a YouTube channel which now has 1 video :-): https://www.youtube.com/channel/UCQtGYYblNipjUcA_QmY6dWw

The stream itself, you can find here:

I set up a basic build pipeline for Henrik‘s new Open Source tool “BC AL Toolbox”. Something definitely worth checking out!

Conclusion: I’m going to have to work a bit on the quality of about everything. But other than that, please tell me – what do you think? Wasted time? You want to see more? If you have topics in mind – I’m all ears. Use the comments below this post, I’d say ;-).

Business Central Development Life Hacks


I haven’t exactly blogged much lately, but that doesn’t mean I haven’t been active for or in the community. You might have seen my previous blogpost for example, where I explored the wonders of streaming (something I’ll pick up again in the near future, by the way ;-)). I have also contributed a few sessions on a few occasions.
One of these occasions was “Dynamics Con” – a free conference that was organized for the second time. You might remember my blogpost about it from when it was brand new: New Upcoming Conference: DynamicsCon.
I’d like to highlight some points of my session in this post. You can find the YouTube recording at the end of this blogpost.

The Topic

During the Spring 2020 conference, I saw a session “Business Central Life Hacks” by Shawn Dorward (who has a blog “https://lifehacks365.com/” – how fitting is that ;-)) – which focused on functional life hacks. This inspired me to do a developer version of it. So I asked him if I could “steal” his title, and go for “Business Central Development Life Hacks“. I submitted the session title, and I was lucky enough to have it voted high enough to be selected as a session.

The Content

I wanted to focus on the not-so-obvious topics this time. So I tried to explore new extensions, new shortcuts and so on. I ended up with about 6 hours of content that I had to cut back to about 40 minutes. This is what I covered (with links to the corresponding time in the video):

VSCode Shortcuts – (video)

I shared quite some shortcuts that I use all the time within VSCode, and there was quite some interest in an overview. Well, the entire session was done in VSCode, and the readme can be found in my repo on GitHub. Also the shortcuts:
https://github.com/waldo1001/DynamicsConDemoMarch2021/blob/main/DemoScripts/Demo.md.

Now go and learn ‘m by heart ;-).

Searching – (video)

We’ve probably all used quite some search functionality within VSCode. I do believe though that there is more to “Searching” than the obvious. Like saved search results, regex searching, and so on. Very useful ;-).

Multiroot Workspaces – (video)

You must know by now I am a Multiroot-workspace-fan, a “multirootie” if you will ;-). I walk you a bit through the advantages you have when working multiroot.

Settings – (video)

I didn’t spend too much time on “settings”. I have loads of settings, but I just wanted to share a bit on how you can improve the UX for AL development (in terms of visibility of the files in search and the explorer), and even synchronize settings on multiple PCs.

Snippets – (video)

Of course ;-). If I talk VSCode, I usually talk snippets as well. In this case though, I talked about how you can disable snippets ;-). Yep, you ARE able to disable snippets from any extension, to not clutter your intellisense.

Extensions – (video)

And yes, I was able to find two more VSCode Extensions that are (in my opinion) interesting for AL developers:

  • Error Lens, which gives you information about errors and warnings within your code.
  • Docs View, which gives you documentation of the currently selected statement, method, .. in a separate window, which I found quite useful in quite some cases.

What is your opinion? Should I add them to the “AL Extension Pack“? I probably will ;-).

MSDyn365BC.Code.History – (video)

The “hack of all hacks” in my opinion: the MSDyn365BC.Code.History project from Stefan Maron. One of my favorite community contributions, which I seem to be using all the time. I wanted to share some tips that Stefan hasn’t shown yet on his blog: how to browse through the project with VSCode (of course ;-)).

Using Tasks – (video)

During the questions, I had some time to share yet another piece of content that I had to cut earlier, and that is the usage of “VSCode Tasks”: a configurable ability to run PowerShell (and other tasks) from the Command Palette. I use it all the time .. :-).
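It wasn’t part of the session materials, but a minimal sketch of such a task in .vscode/tasks.json looks like this (the script path is obviously a placeholder):

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Say hello (PowerShell)",
            "type": "shell",
            "command": "powershell",
            "args": [
                "-NoProfile",
                "-File",
                "${workspaceFolder}/scripts/SayHello.ps1"
            ],
            "problemMatcher": []
        }
    ]
}

You run it from the Command Palette with “Tasks: Run Task”.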

Something Different

As you might have noticed if you saw the entire video – I wanted to do it a tad different. Instead of switching to PowerPoint all the time, I decided to use VSCode as my PowerPoint, using an md (MarkDown) file and VSCode BreadCrumbs to navigate through the agenda of the session. Personally, I liked how this went – but .. did you? Any feedback is always appreciated.

The session

So now you have all the references that I used, the project, the video at the specific times, the links, .. So all that’s left for me to do is to share the actual video :-).

Enjoy!


Design Considerations for Upgrade Codeunits


Recently, in our product, we enabled support for the new “Item References” in Business Central. Basically meaning: when upgrading to the new version of our product, we wanted to:

  • Make sure our code supported the new “Item Reference” table instead of the old “Item Cross Reference” table
  • Automatically enable the “Item Reference” feature (necessary, because when your code depends on this new feature, it has to be enabled ;-)).
  • Automatically trigger the data upgrade (which basically transfers all data from one table to the other)

Obviously, this is all done using an upgrade codeunit. And this made me realize that there are some things that might not be too obvious to take into account when doing things like this. So .. blogpost on upgrade codeunits.

The base mechanics

As such, an upgrade codeunit is quite simple. It’s a codeunit that is run when you’re upgrading an app from version x to version y (which is higher, obviously). In this codeunit, you will typically move data from one place (field or table) to another.

It’s a codeunit with the subtype “Upgrade” which basically enables six available triggers.

OnCheckPreconditionsPerCompany/ OnCheckPreconditionsPerDatabase
You can use the “precondition” trigger to test whether an upgrade is possible or not. If not: raise an error, and the upgrade process will be rolled back, and the previous app will still be available (if you’re running the right upgrade procedure ..).

OnUpgradePerCompany / OnUpgradePerDatabase
In the “OnUpgrade” trigger, you’ll typically put the upgrade code.

OnValidateUpgradePerCompany / OnValidateUpgradePerDatabase
And in the end, you’d be able to validate your upgrade. Again: if not valid (let’s say, if the field still has content), you can again raise an error to roll back the upgrade process.

Avoid running the upgrade multiple times

To track whether an upgrade has run or not, Microsoft created a system that they call “Upgrade Tags”.

Upgrade tags provide a more robust and resilient way of controlling the upgrade process. They provide a way to track which upgrade methods have been run to prevent executing the same upgrade code twice. Tags can also be used to skip the upgrade methods for a specific company or to fix an upgrade that went wrong.

Microsoft Docs

So, you’ll have to take the upgrade tags into consideration, which means:

  • Create/manage a unique tag for every single upgrade method (usually including a companyname, a reason and a date)
  • Do NOT run the upgrade if the tag is already in the database
  • Add the tag in the database when you ran the upgrade
  • Add the tag in the database when you create a new company

The latter is often forgotten, by the way.. . But obviously very important.

This, my friends, calls for a template pattern (if I may call it that – people have been quite sensitive about that word) for an upgrade codeunit. But let’s leave that for later, when I talk about – yes AJ – SNIPPETS!

Up until now, you’d be able to find the info in the default Microsoft Docs documentation, which is already quite elaborate .. but not elaborate enough.

Don’t remove data!

Consider this:

  • The base app upgrades data you depend on
  • Or an ISV product upgrades data you depend on
  • Let’s say that dependency is a field that you added in that (now obsolete) table, and has to be moved to your new field.
  • Or it could also be some business logic, where you depend on a value of a field which is not completely obsolete, and handled differently

In any case – it might very well be that the depending app will need this data to do whatever: move its own fields out, or move default data to new extension data, or .. Whatever.

In other words:
You (as Microsoft or as ISV) simply can’t assume nobody needs the data anymore. You really can not.

In other words:
Don’t delete the data. Move it to another field/table, make the original field/table obsolete. But that’s it! Keep the data and don’t kid yourself you need to delete it from the obsolete table, just because you think that’s cleaner. I can’t stress that enough! Just assume that in the upgrade process, there will be a dependent app that will be installed after your app, so which will run the upgrade after you as well. It will need the data. Keep it. Just keep it.

Don’t delete it like we did. Yep, we didn’t think about this, and we messed up. And we had to redo the upgrade. No harm done – we noticed quite fast – but still. I hope I prevented at least one (possibly major) problem with this blogpost ;-).

Preserve the SystemId!

Another not-so-obvious thing, but in my book VERY important, is the SystemId. When you’re transferring a complete table (which in my case was the “Item Cross Reference” to the “Item Reference” table), think about transferring the SystemId as well!
I mean, the SystemId might be the primary key for anything Power-stuff, or any third-party application that is interfacing with your system through APIs. When you would simply transfer like this:

ItemReference.Init();
ItemReference.TransferFields(ItemCrossReference, true);
ItemReference.Insert(); // this generates a brand new SystemId for the new record

It would create a new SystemId for every record in the new table. Basically meaning all these records would be “new” for any interfacing system.

Not.Good!
And guess what – this was Microsoft’s own upgrade code, which I was lucky enough to catch in time. In the meantime, Microsoft has fixed it – but I do hope they’ll remember this for any other table in the future that gets transferred to a new table.

The fix is quite simple:

ItemReference.Init();
ItemReference.TransferFields(ItemCrossReference, true);
ItemReference.SystemId := ItemCrossReference.SystemId; // carry over the original SystemId
ItemReference.Insert(false, true); // second argument: insert with the SystemId we just assigned

Just set the SystemId, and insert with the second “true”. Didn’t know this second boolean existed? Well, here are the official docs: Record.Insert Method (it turns out there are different pages about the insert method – it took me a while to find the right one.. ).

Snippets

As promised, I have some snippets for you, and they are already in “waldo’s CRS AL Language Extension“. The three snippets concerning upgrades are:

  • tUpgradeCodeunitwaldo
  • tUpgradeTableProcedurewaldo
  • tUpgradeFieldProcedurewaldo

tUpgradeCodeunitwaldo
This snippet has the base mechanics of what I was talking about above. You see the upgrade tag, and the event to subscribe to in order to manage upgrade tags in new companies. This snippet will also create a default way to create a tag, including the current date. How cool is that ;-).

The snippet deviates a bit from what is described on Microsoft Docs. I just like this much better, as everything about this upgrade is nicely encapsulated in one codeunit, including the event.
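To make that concrete, here’s a hand-written sketch of the structure the snippet scaffolds (the names, numbers and tag value are mine):

codeunit 50130 "MyApp Upgrade"
{
    Subtype = Upgrade;

    trigger OnUpgradePerCompany()
    begin
        UpgradeItemReferences();
    end;

    local procedure UpgradeItemReferences()
    var
        UpgradeTag: Codeunit "Upgrade Tag";
    begin
        // do NOT run the upgrade if the tag is already in the database
        if UpgradeTag.HasUpgradeTag(GetItemReferenceUpgradeTag()) then
            exit;

        // ... the actual data upgrade goes here ...

        // add the tag when the upgrade has run
        UpgradeTag.SetUpgradeTag(GetItemReferenceUpgradeTag());
    end;

    local procedure GetItemReferenceUpgradeTag(): Code[250]
    begin
        // unique per upgrade method: company prefix, reason and date
        exit('MYCOMP-ItemReferenceUpgrade-20210501');
    end;

    // the often forgotten part: register the tag for new companies as well
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Upgrade Tag", 'OnGetPerCompanyUpgradeTags', '', false, false)]
    local procedure RegisterPerCompanyTags(var PerCompanyUpgradeTags: List of [Code[250]])
    begin
        PerCompanyUpgradeTags.Add(GetItemReferenceUpgradeTag());
    end;
}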

tUpgradeTableProcedurewaldo
This snippet is meant to be used in an upgrade codeunit, and will generate code to move all data from one table to the other. And you see already that it handles the SystemId correctly as well. Again – very important!
As the last line, you’ll find the procedurecall, which is meant to be moved in the right “OnUpgrade…”-trigger.

tUpgradeFieldProcedurewaldo
Quite similar to the above, but this snippet is intended to move one field to another field. And again, the procedure call at the end is to be moved into the right trigger.

Feedback?

That was it! If there is anything I left out, I’d be happy to know! You can leave a comment, you can send me a message, .. . I’m always happy to hear!

Microsoft Dynamics 365 Business Central Publisher program


The last couple of months, there have been quite some questions from ISVs (and especially the “old” ISVs) on how to register their apps in their specific situations. Just to name a few:

  • I only have OnPrem business – and I want to create a new product. How do I get a new object range?
  • Do I still need to go through the CfMD program to certify my apps?
  • App range? RSP range? What the hell is the difference?
  • With my new app, I don’t want to go to AppSource just yet – but first OnPrem – what do I do?

So .. in short: quite some questions on anything product registration and the obligations and/or restrictions that come with it.

Well, you might have seen the announcement from Microsoft ( http://aka.ms/bcpublisherprogram) – Microsoft comes with a new program which replaces the “Registered Solution Program (RSP)” and the “Certified for Microsoft Dynamics (CfMD)” program. And not only that .. there is more to consider. Much more…

Microsoft Dynamics 365 Business Central Publisher program

What the name indicates for me is that there is clearly an “AppSource first” strategy. Why? Well, the word “publisher” is typically used for anyone that puts apps in the cloud on a marketplace. Well, sometimes also the word “author” is used – but anyway .. let me make my point here.

Cloud first strategy

Now .. It shouldn’t be necessary to explain how many advantages there are in the cloud version of BC. I mean – first of all: all base app features will work in the “cloud first” (get it?) and OnPrem “if you’re lucky”. And .. on top of that, you get all those extra services that come with the cloud. The Power-“fluff” (I should be calling it something different, I know – but that area is still too grey for me ;-)), the Azure stuff, … and so.much.more. The BaseApp will have a cloud first mentality: and so should you.. .

So, for Microsoft, it basically comes down to convince/force/… the ISV mindset to be cloud first. It’s as simple as that. In short:

  • Do you have a product? Put it on AppSource.
  • You don’t need it on AppSource but only OnPrem? Still, put it on AppSource, in order for you to be able to implement it OnPrem.
  • You don’t have this cloud first mindset? Then it might happen that you’re going to have to pay fees.

Wait … fees? What??

Are we being “punished” for selling OnPrem only?

Well .. If you only do OnPrem business, and you don’t register your apps on AppSource: yes, you’re going to have to pay a fee. Is that being “punished”? I don’t look at it that way. I look at it as Microsoft trying to motivate you to do the “right” thing, to follow their strategy. To sail in the winds they are blowing … .

Do know .. Microsoft absolutely does not want to make you pay for doing your business. Not at all. The best day for them would be the day they don’t have to invoice anyone for this at all .. because that would be the day that all partners have finally embraced the cloud-first strategy.

Because, let’s be honest, not all partners have really been listening and acting on Microsoft’s “cloud-first” approach, have they? It’s not like it hasn’t been obvious though:

  • Microsoft moved “infrastructure software” to the cloud: Azure was born
  • Microsoft moved end-user software to the cloud: Office 365 with even Sharepoint, Exchange, …

Did you really think ERP wasn’t going to follow the same approach? Hasn’t that been obvious for so long already? I’m just saying .. .

In other words, anyone who has already invested so much for moving to AL (apps), for moving to a “cloud-ready” architecture, for moving to AppSource – they are good! No extra investment needed. And they are – to use Microsoft words – “rewarded by freeing them from program fees or additional test efforts, outside of what is required for publishing to AppSource“.

Who would have to pay?

To not misphrase anything, let me quote Microsoft on what you can expect:

This program will introduce fees that will gradually increase from September 2022 onward for publishers whose resellers have sold on-premises solutions to new customers without an equivalent cloud-based solution. The fees will only be applied to sales in countries where the Dynamics 365 Business Central online service is available. Solutions registered to existing customer licenses before the program cut-off date will not be impacted by program fees. However, adding new non-AppSource solutions after the cut-off date will be impacted.

Kurt Juvyns – Microsoft

For me, that means: Any ISV, who implements (by partners or by itself) products (apps) that do not exist on AppSource, will have to pay fees to do so.

In Practice

Let us look at this practically.

  • You will still be able to do OnPrem business. Check!
    • The only thing you need to do is make sure that you register your app on AppSource as well. That’s it.
    • So .. You might want to make sure to follow the cloud rules, no? Hybrid? Hell no! CodeCustomisedBaseApp? I guess not.
  • You will be able to reuse the same codebase for SaaS and OnPrem. Check!
    • You don’t even have to. Just make sure your product is registered, and if necessary, you can branch off and include some OnPrem necessities in your code to use at your OnPrem customers. This is extra work – I would try to avoid that.. .
    • As long as your apps are registered on AppSource, and can be used in the cloud!
  • You don’t need to certify for CfMD anymore. Check!
    • That has been replaced by AppSource validation.
  • You will be able to use all registered number ranges on all environments. Check!
  • If you already have a registered number range, you don’t have to renumber to comply with this new program. Check!

What’s next?

If it wasn’t clear yet, let me make it clear in my own words:

  • If you haven’t started to modernize your solution: start now
  • If you’re still in C/AL (v14 or not): start now and make sure you build your products into apps, and that they work on the latest version.
  • If you’re still hybrid, it means you’re still on v14: start now to unwind the reason why your solution is hybrid and make it real apps in the latest version.
  • If you’re having a code-customized base app: start your way to real apps now.
  • If you did your homework, and moved your product to cloud-ready solutions already – but you’re not on AppSource just yet? Well, register for AppSource. Now. It might take longer than you think (affixes, tooltips, … ) ;-).
  • Are you still stuck with all your dll’s, which makes your product OnPrem only? Make your way to Azure Functions or another technology, to make sure you can register your apps on AppSource.

That’s all I could come up with :-).

If you need help – there are people that can help you. If you think I can help you, just contact me – if I have time, I will – but there are also dev centers that are specialized for helping/coaching partners to make this move to the cloud. More info you can find here: Find a Development Center (microsoft.com)

Questions?

I’m sure you’ll have a ton of questions, depending on what situation you are in. I don’t know if I answered any – but some of the questions I had a few weeks back are now answered for my own personal situation.
But still – IF there are questions, you can reach out to Microsoft!

Thoughts

Looking back at the focus, the sessions, the blogposts, … that I did the past years, it seems that the recommendations that I have been making were not that far from the truth. Like:

  • Cloud-ready software architecture
  • No Hybrid (I was quite passionate about that…)
  • No code-customized base app (same passion .. This was never an option for me .. Never ever)
  • Don’tNet (which I shared in my latest 2 NAVTechDays appearances – even Vjeko agreed with me ;-))
  • CodeCops (which should all pass! Or at least most of them)
  • DevOps (imho the only way to manage your product lifecycle – and, since you’ll be on AppSource, and you’ll have to manage your app for upcoming versions – well – DevOps is going to be crucial ;-))
  • No migration but rebuild (I have been advocating this, not on blog, but in any conversation that I have been having with colleague-partners asking me on “what do I do”. I’m very happy to have gone the “totally rebuild” way, not having to pay any attention to refactoring, just a complete rewrite of the product. It gives you all the freedom in the world).
  • ..

Because, you know, if you comply with all this, the step to AppSource is pretty easy.

I hope this was useful!

Stabilize your Job Queue – a few tricks


For a few people, this will be a very “current problem”; for others, who have suffered through years of “unstable” NAS and Job Queue for that matter .. this blogpost might be interesting.
The primary reason for this post is an issue that – after quite some debugging – seemed to be an issue introduced in version 17.4 (and currently also v18.0). But I decided to tackle the issue in a way that’s a bit more generic: dealing with “unstable Job Queues” instead of “just this particular issue”.

Disappearing Job Queue Entries

Although this post isn’t just about the bug that haunts us in the current release of Business Central (17.6 / 18.0 at the time of writing) – I do want to start with that first. It has to do with Job Queue Entries that were disappearing for no apparent reason. I traced it down to this code change in the default app.

In a way: it happened quite frequently (multiple times a day) that recurring job queue entries were disappearing .. which obviously prevents the execution of the task at hand.
There are some prerequisites to this behavior, though. And you can read them actually quite easily in code:

  • They must be part of an existing Job Queue Category
  • There must be multiple Job Queue Entries within that category
  • It must be a stale job, which means:
    • It must be “In Process”
    • While it doesn’t have an “active” TaskId (which kind of means it doesn’t exist in the Task Scheduler tables).

That’s a lot of “ifs”. But .. in our case .. it happened. Frequently. We use categories to execute jobs in sequence and prevent locking or deadlocks.

Good for us, the issue is solved already in vNext (Minor). This is going to be the new version of the code:


But ..

This also tells us something else. I mean .. Microsoft is writing code to act on certain situations which means that these situations are expected to happen:

  • Task Scheduler might “lose” the Task Id that it created for a certain Job Queue Entry
  • When a Job Queue is InProcess, it might happen that it “just stops executing”

Or in other words: the Job Queue can act unstable out-of-the-box, and Microsoft is trying to mitigate it in code.

There are a few good reasons why the Job Queue can act somewhat unstable

Before you might go into hate-and-rant mode – let me try to get you to understand why it’s somewhat expectable that this background stuff might act the way it does. Just imagine:

  • Reconfigure / restart a server instance – maybe while executing a job
  • Installing an app with different code for the job queue to be executed – maybe while executing a job
  • Upgrade-process
  • Kicking a user – maybe while executing a job

We already noticed some instability just after some of these actions. And probably, there are many more.

So, yeah .. It might happen that queues are stopped .. or maybe frozen. Let’s accept it – and be happy that Microsoft is at least trying to mitigate it. I’m sure the code in vNext is a (big) step forward.

But .. maybe there are things that we can do ourselves as well.

Prevent the delete

As said, the above issue might delete Job Queue Entries. If you don’t want to change too much of your settings (like removing all categories until Microsoft fixes the issue) to prevent this, you might also simply prevent the delete. How can you do that? Well – just subscribe to the OnBeforeDeleteEvent of the “Job Queue Entry” table ;-). Here is an example:
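A sketch of what such a subscriber could look like – the GuiAllowed check is my own simple heuristic (only let interactive sessions delete), so adapt the condition to your situation:

codeunit 50131 "Protect Job Queue Entries"
{
    var
        BackgroundDeleteErr: Label 'Deleting Job Queue Entries from a background session is blocked.';

    [EventSubscriber(ObjectType::Table, Database::"Job Queue Entry", 'OnBeforeDeleteEvent', '', false, false)]
    local procedure OnBeforeDeleteJobQueueEntry(var Rec: Record "Job Queue Entry"; RunTrigger: Boolean)
    begin
        if Rec.IsTemporary() then
            exit;
        // the stale-job cleanup runs in a background session;
        // a user deleting a record in the client runs in a GUI session
        if not GuiAllowed() then
            Error(BackgroundDeleteErr);
    end;
}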

But .. as said – this only solves this one issue that is current – but we can do more… and maybe even look ahead a bit and make the Job Queue act more stable for our customers …

Restart just in case

One thing we learned is that restarting a job queue never hurts. Although “never” isn’t really the right word to use here, because restarting may come with its caveats.. . But we’ll come to that.

We noticed that we wanted to be able to restart job queues from outside the web client – in fact – from PowerShell (Duh…). Thing is, restarting a Job Queue Entry is actually done in the client by setting and resetting the status – by simply calling the “Restart” method on the “Job Queue Entry” table.

If we decouple this a little bit .. I decided to divide the problem into two parts:
– We need to be able to run “any stuff” in BC from PowerShell
– We need to be able to extend this “any stuff” any time we need

Restart Job Queues – triggered from an API

I decided to implement some kind of generic event, which would be raised by a simple API call .. which would kind of “discover and run” the business logic that was subscribed to it.

So, I simply created two publishers

And trigger it from this “dirty workaround” API (I actually don’t need data, just the function call from the API).

As a second step, I simply subscribe my Job Queue Restart functionality.

So now, we can restart job queue entries in an automated way.
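The screenshots didn’t survive here, so here’s a rough reconstruction of the idea (all names are mine, and I left the API plumbing out – the API simply ends up calling RunMaintenance):

codeunit 50132 "Maintenance Triggers"
{
    // called from the "dirty workaround" API - e.g. by a PowerShell script in the pipeline
    procedure RunMaintenance()
    begin
        OnRunMaintenance();
    end;

    [IntegrationEvent(false, false)]
    internal procedure OnRunMaintenance()
    begin
    end;
}

codeunit 50133 "Restart Job Queues"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Maintenance Triggers", 'OnRunMaintenance', '', false, false)]
    local procedure RestartJobQueuesOnRunMaintenance()
    var
        JobQueueEntry: Record "Job Queue Entry";
    begin
        // Restart() sets and resets the status, which reschedules the background task
        if JobQueueEntry.FindSet() then
            repeat
                JobQueueEntry.Restart();
            until JobQueueEntry.Next() = 0;
    end;
}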

When do we restart?

We decided to restart at every single app deploy (so, basically from our build pipeline) and at restart of services (if any). Because:
– It doesn’t hurt
– It also covers the upgrade-part
– It would also call extra “OnSetDatabase” code that might have been set up in the new version of the app

Language/Region might be changed

One caveat with an automated restart like that: the user that does the restart – in our case, the (service) user that runs the DevOps agent – will be configured as the user that executes the jobs. A consequence of that is that the language and/or region might change as well. And THAT was a serious issue in our case, because all of a sudden, dates were printed in US format (mm/dd/yy) instead of BE format (dd/mm/yy). You can imagine that for due dates (and others, like shipment dates), this is not really a good thing to happen.

Long story short, this is because there is a field on the “Scheduled Task” that determines which format the system will use in that background task. That field is “User Format ID” (don’t ask how long it took me to find that out…).

So, we decided to override this value whenever a JobQueueEntry is being enqueued … .

You see a hardcoded value – we didn’t literally do it like this – it’s merely to illustrate that you can override the language/region on the background task ;-).

Combine when possible

I don’t know if you know, but you can “Clean Up Data with Retention Policies” – Business Central | Microsoft Docs

A big advantage of this functionality is that all your jobs that clean data from your own log tables can be hooked into it – so you basically only have one job in the Job Queue Entries that manages all these cleanups. Only one job for many cleanups.

My recommendation would be to apply this kind of “combining” jobs whenever possible and whenever it makes sense. 5 jobs have less chance to fail than 50.

We were able to reduce the number of Job Queue Entries by simply calling multiple codeunits from one new codeunit – and using that new codeunit for the Job Queue ;-). Sure, it might fail in codeunit 3 and not call codeunit 4 – so please only combine when it makes sense, and when it doesn’t hurt the stability of your own business logic.

Manage your retries

Just a small, obvious (but often forgotten) tip to end with.

Depending on the actual tasks that are executing – it might happen that things go wrong. That’s the simple truth. If you are for example calling an API in your task, it might happen that for a small amount of time, there is no authentication service, because the domain controller is updating .. or there is no connection, because some router is being restarted. And in SaaS, similar small problems can occur to any 3rd party service as well …

It is important to think about these two fields:

  • Maximum No. of Attempts to Run – make sure there are enough attempts. We always take 5
  • Rerun Delay (sec.) – make sure there is enough time between two attempts.

It doesn’t make sense to try 3 times in 3 seconds, if you know what I mean. For example, a server restart, or a router restart will take more than 3 seconds.

I would advise to make sure there are 15 minutes of retry-window: 5 retries, 180 seconds delay. This has a massive impact on stability, I promise you ;-).
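If you create your Job Queue Entries from code, that recommendation translates to something like:

// 5 attempts with 180 seconds in between = a 15-minute retry window
JobQueueEntry."Maximum No. of Attempts to Run" := 5;
JobQueueEntry."Rerun Delay (sec.)" := 180;
JobQueueEntry.Modify(true);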

Conclusion

Let’s not expect that the Job Queue will run stable all the time, every time, out-of-the-box. There are things we can do. Some are small (settings), some involve somewhat more work. I hope this helped you at least a little bit to think about things you could do.

A somewhat easier way to generate a DGML Dependency Graph of your Business Central extensions


You might have read my post “Visualize app.json dependencies in VSCode (using GraphViz)” where I explained “another” way to generate a dependency graph. Other than what? Well – other than the DGML that was just announced on Microsoft’s Launch Event of Business Central 2020 Wave 2 as being “on the drawing board”.

Well – today Stefano Demiliani found out that the first bits are already on our system. He wrote about it in his blogpost: “Dynamics 365 Business Central: creating a DGML dependency graph for your extensions“, to which I don’t really have anything to add.

Except for one little thing…

I don’t think a lot of people are that familiar with running the alc.exe .. or know its parameters from the top of their heads. So I decided to dive a bit into TypeScript again to make that part a bit easier for you.

And the “CRS: Compile DGML” command was born as part of the “waldo’s CRS AL Language Extension“:

The function will:
1) Find your working directory
2) Find the alc.exe
3) Find out the symbol path from your settings
And run the command that needs to run to generate the DGML of your current project.

This is obviously an alpha version.. and I’m pretty sure it will never become a beta as I expect Microsoft to come with their own command or settings (if not yet already).

But for now .. Enjoy

Documenting your Business Central (custom) APIs with OpenAPI / Swagger


For reasons that are not too important, I am trying to find a way to “describe my custom APIs”. You know. You’re at a project, you had to implement an integration with a 3rd party application, and you develop some custom APIs for it. When done, you have to communicate this to the 3rd party for them to implement the integration. So you need to hand over documentation: a kind of guide through “how to use Business Central APIs”, and a description of the responses, requests and possibilities you have with “typical” Business Central APIs.

Disclaimer

Now, before I continue, let’s make one thing absolutely clear. I’m by far not an API god, guru, master or whatever you call an expert these days ;-). I’m just an enthusiast who tries to find his way in the matter .. . The reason for this blogpost is merely because recently I got the question “how do I communicate this API that I just made with the customer that has to use it?“.

Let’s see if we can find a decent answer.

When you get questions like this, it makes sense to try to find out what they would like to get as an answer. What would I expect as a decent, useful description of an API that I’d have to use?

OpenAPI / Swagger

And we can’t deny that when we need to integrate with something, we LOVE to get some kind of OpenAPI / Swagger description. As such: a document (or set of documents) that defines/describes an API following the OpenAPI Specification. It is basically a JSON or YAML file, that can be represented in some kind of UI to make it very readable, testable, .. . An industry-accepted standard, so to speak..

There is a lot to say about this, and I don’t want to bore you with the specifics. I’ll simply give you a decent resource for you to read up: OpenAPI Initiative (openapis.org) / OpenAPI Specification (Swagger) . And a screenshot of what it could look like, if you would have some kind of graphical UI that makes this JSON or YAML much more readable:

As you can see – a very descriptive page, with a list of API endpoints, a description on how to use them, and even a way to try them out, including details on what the responses and requests would have to be. Even more, this screenshot also shows the abilities: like, I can only read the accounts, but I can read, update and delete trialbalances (if that makes any sense .. ).

But .. this does not exist out-of-the-box for Business Central. And this definitely doesn’t exist for custom API’s out-of-the-box, as .. you know, they don’t exist “out-of-the-box” ;-).

So .. what CAN we do out-of-the-box?

Well, the most basic description we can give to our customer/3rd party is what we call an “edmx” (Entity Data Model XML). Which basically is an XML description of your APIs. You can simply get to the edmx by adding “$metadata” to the URL. Or even better: “$metadata?$schemaversion=2.0”, to also get the enum descriptions.
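For a custom API, the URL looks like this (all segments are placeholders for your own setup):

https://<server>:<port>/<serverinstance>/api/<publisher>/<group>/<version>/$metadata?$schemaversion=2.0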

From the docs: Use OData to Return and Obtain a Service Metadata Document – Business Central

Well, I can tell you – that’s not an OpenAPI – and in general, the developers at the customer are going to be a bit disappointed in that kind of documentation. And they are not that wrong.

So .. Let’s see if we can convert that edmx into something more useful: an OpenAPI description of our (custom) APIs! I reached out to the twitterz .. and funny enough, it was my own colleague Márton Sági who pointed me in an interesting direction.

Márton pointed me to this OpenAPI.NET.OData library (and Sergio acknowledged it was a good direction), which, after some investigation, happened to have some kind of converter as well. So .. I tested it out, and something useful seems to come out of it. Let me briefly explain what I did (after quite some investigation – this really isn’t my cup of tea).

  1. I cloned the repo and opened the solution in Visual Studio
  2. I built the OoasUtil project
  3. I used the exe to convert the edmx that I saved to a file

This was the command that I used:

.\OoasUtil.exe -y -i "C:\Temp\edmx\test.edmx" -o "C:\Temp\edmx\test.yaml"

More info on how to use this OoasUtil.exe you can find here: OpenAPI.NET.OData/src/OoasUtil at master · microsoft/OpenAPI.NET.OData.

The question is – what do I do with that yaml file, right? Well, let me get back to where I started: this OpenAPI Specification is some kind of standard way to describe an API, and the language used is either JSON or YAML. This converter can create both. And this is just the next step to get to that graphical UI that I was talking about.

Now comes the easy part .. . You might already have googled “Business Central OpenAPI” .. and it will have landed you on a small page that explains how to open the standard APIs as a yaml. In fact – there are two pages: there is a page for v1 and there’s a page for v2:
– V1: OpenAPI Specification – Business Central | Microsoft Docs
– V2: OpenAPI Specification – Business Central | Microsoft Docs
The pages are essentially the same, but you’ll see that Microsoft has only prepared this v1 yaml for you (and even that one isn’t downloadable anymore, it seems  ). The download isn’t available for v2. But guess what – we just created our own yaml! And now you know how to do that for any kind of API as well.. :

  • Default Microsoft APIs v1
  • Default Microsoft APIs v2
  • Microsoft Automation APIs
  • Runtime APIs
  • Custom APIs

What’s more interesting on these Docs pages is that they explain how to display it, and even edit it. Locally, you can use VSCode extensions to edit the API documentation (essentially: the yaml file) and display it with SwaggerUI. There are previewers, there is a way to create your own node app, … .
You could also use an online service like SwaggerHub to do basically the same: display the yaml in a UI-friendly way, and edit it as well. I tried both – both are very easy to do. I’m not going to talk about how to do that, as it’s explained well enough on the Docs above. All you need is the yaml, and you’ll be able to use that to visualize it.

To make it somewhat easier for you …

… I created a small repo. And you can find it here: waldo1001/BusinessCentralOpenAPIToolkit: Business Central OpenAPI Toolkit

I tried to explain in the readme how to use it. What I did is simple. I basically included …

  • .. the OoasUtil assembly so you don’t have to clone & compile it anymore from Microsoft’s github. It’s “just there” and the conversion tool is available as an exe.
  • .. scripts to make it somewhat easier to get the edmx from Business Central APIs. We need this in a file, not as a result in some browser. This script (1_.edmx.ps1) will output it to a file (see the sketch right after this list).
  • .. the script (2_convertEdmxToYaml.ps1) to convert this edmx to an OpenAPI Specification in yaml
  • .. the javascript to create a SwaggerUI (node app) based on the yaml
  • .. the scripts (3_StartSwagger_.ps1) to run the node app in node.js (you’ll have to install node.js of course).
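To give you an idea of what the download step does, here is a minimal sketch (assuming an OnPrem/docker environment with NavUserPassword authentication – the URL, credentials and paths are placeholders; the script in the repo is more complete):

# Note the single quotes: they prevent PowerShell from interpolating $metadata/$schemaversion
$cred = Get-Credential
Invoke-RestMethod `
    -Uri 'http://bcserver:7048/BC/api/v2.0/$metadata?$schemaversion=2.0' `
    -Credential $cred `
    -OutFile 'C:\Temp\edmx\test.edmx'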

As an end result, you should have a SwaggerUI representation with all the goodies you’d expect, all applied to your custom API (like you see in the screenshot above). If you set up the servers in the yaml, you’ll even be able to try out the APIs interactively as well (tested and approved  ).

Plans

Our plans don’t end here, though. Just imagine:

  • All your customers are on your own docker image
  • Within this docker image, you also provide some WebApp (like this node app) that can display this SwaggerUI
  • When you deploy your app, an up-to-date yaml is being generated, and your SwaggerUI is being updated with it

This way, you’d kind of have up-to-date OpenAPI documentation of all your custom APIs available for your customer at all times. It’s work in progress in our company – and definitely worth the investment, in my opinion.. .
Of course, this is for an OnPrem scenario, but much the same could be done for BC SaaS. It would just be a matter of deploying the yaml to some kind of SwaggerUI WebApp (on Azure?).

Are there alternatives?

As I said – I am by no means any kind of expert in this matter. In the same twitter-conversation, Damien Girard recommended using the “OData Client Library“. I didn’t investigate that yet. And Stefano mentioned that it’s possible to “embed APIs on Azure API Management” to have Swagger out-of-the-box. I didn’t investigate that either. @Stefano: maybe it’s interesting as a blog post at some point?  .

Business Central Default APIs are not localized – or are they?


Good news.

You’re probably all familiar with the great work Microsoft is doing on providing us with a huge set of useful out-of-the-box APIs. No? Well, you can find all the information right here: API(V2.0) for Dynamics 365 Business Central – Business Central | Microsoft Docs

For Belgium, these have been useless in many situations, though. Simply said: the Belgian localization has some fields that kind of replace default fields (like the “VAT Registration No.”). When I contacted Microsoft about it, the statement was simple:

The Business Central Default APIs are not localized

Microsoft

And I can understand, most honestly. So I didn’t expect anything to change, and started to abandon the default APIs completely, implementing custom APIs in all situations.
In fact – since the default Business Central APIs are “never” localized, and chances are that you ARE in a localized database – there is always a chance that something is wrong with, or missing from, the default APIs. So “just” using them and assuming everything will work out-of-the-box is maybe not the best assumption to make.. .

Recently, my eye caught this change in 18.1:

So – it seems that for the Belgian localization, Microsoft made a workaround to provide the “Enterprise No.” field instead of the “VAT Registration No.” – which kind of means that the default Business Central APIs just got a whole lot more interesting for Belgium.

And maybe not only for the Belgian localization. I don’t know. I notice quite a lot of changes in the V2 APIs since the 18.0 release:

It makes sense to browse through the code changes between different versions of Business Central. This is basically how I figured out the above. Would you like to know how? Well, here’s a video where I show that, using Stefan Maron‘s tool ;-).

Does this mean – let’s all use the default APIs?

Well, for me not just yet, but at least for Belgium it’s worth considering again – at least in some cases. I know about some use cases where it’s absolutely necessary to be able to connect to BC without any kind of custom API. All I can say is: make sure the data is like it’s supposed to be, test it thoroughly in the situation of that specific customer, .. basically meaning: you should be making a conscious choice instead of just assuming all is well ;-). That’s all.

The complexity of complex return types


Since Business Central 2021 Wave 1 – v18, if you will – we are able to return about ANY type from a procedure. For many languages, that’s the most normal thing, but for Business Central’s AL language, it was not. So, at the time this was announced, many people were ecstatic and talking about it in sessions, on blog posts, on Twitter and so on.

And me as well. In fact, I did a session on the Dutch Dynamics Community, and one for Directions. The one for Directions was recorded and shared on the Directions Community Portal, and you can find the recording here: Webinar Preparing For The Future With Business Central . At 45:19, I start explaining the “Complex Return Types”.

Well .. since you can simply watch the recording, and since it’s much easier to explain in a video instead of in a blog post, I’m going to let you watch the recording for the specifics of what “Complex Return Types” are in Business Central.

What I wanted to focus on in this post – and what I didn’t realize at the time I did those sessions – is why it seems to be so hard to really work with record return values. So .. let me try to explain why it behaves the way it does.

The “problem”

Well, consider a function that would return a filtered set of records:
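The screenshot showed something along these lines – a minimal sketch of a procedure on a “Sales Header” table extension (the object name/number and filters are just an illustration):

tableextension 50100 "WLD Sales Header Ext" extends "Sales Header"
{
    // Returns the sales lines that belong to this sales header
    procedure SalesLines() Lines: Record "Sales Line"
    begin
        Lines.SetRange("Document Type", Rec."Document Type");
        Lines.SetRange("Document No.", Rec."No.");
    end;
}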

It would make it so easy to get all subrecords from some kind of main record, like in this case:

SalesHeader.SalesLines().

Thing is – how do you work with it? If I get back a set of records, I most likely would like to filter it some more, and/or loop it, right? Well, I hope you can understand that this absolutely doesn’t make any sense:
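Something like this (a sketch of the anti-pattern, assuming a SalesHeader variable – please don’t do this):

if SalesHeader.SalesLines().FindSet() then
    repeat
        // process the line ..
    until SalesHeader.SalesLines().Next() = 0; // every call returns a fresh, freshly-filtered instance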

Every Next() would re-get the set of records, so … endless loop it is  .

Well, in a way, this would make sense:
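Assigning the result to a local record variable first, something like (again just a sketch):

SalesLine := SalesHeader.SalesLines();
if SalesLine.FindSet() then
    repeat
        // process the line ..
    until SalesLine.Next() = 0;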

Although, when assigning a Rec to a Rec, it does NOT take over the filters. So, it might make sense, but there’s no way it works: it would simply loop ALL records in the Sales Line table.

I hear you thinking – no problem, there is a “CopyFilters” method, so let’s use that! Well, no, and this time the reason is that that method – for some reason I can’t wrap my head around – needs a “ByRef” variable .. which a return value is not, obviously:
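So something like this simply doesn’t compile (a sketch):

SalesLine.CopyFilters(SalesHeader.SalesLines()); // compile error: CopyFilters expects a ByRef argument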

And before you ask – same for the copy-method:
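(Also just a sketch – at the time of writing, this didn’t compile either:)

SalesLine.Copy(SalesHeader.SalesLines()); // compile error: Copy expected a ByRef argument as well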

The last thing I could think of is working with “GetFilters”. Thing is, there isn’t something like “SetFilters”, only “SetFilter” (singular). So not really generic. I can imagine a way to do this with RecRef/FieldRef, like starting some kind of loop like this:

.. but in all honesty, that’s starting to kind of defeat the purpose, doesn’t it? I kind of refused to dive into this much further, as I couldn’t see it being as performant as it should be either.. . If I’m wrong – please, I’m all open for feedback!

And let’s be honest, it seems that simply removing the “ByRef” necessity on the “CopyFilters” method would solve this problem ..  .

A named return variable, or not a named return variable, that’s the question!

That’s another question we could ask ourselves indeed. And .. does it really matter? Well, I can tell you, it absolutely does matter. It’s something we wouldn’t really think of – at least I wouldn’t – but when you think about it, it actually does make sense how it behaves.

What I’m talking about? Well, let’s consider the difference between option 1:
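Option 1 – filter directly on a named return value (a reconstructed sketch, with illustrative names):

procedure SalesLinesOption1() Lines: Record "Sales Line"
begin
    Lines.SetRange("Document Type", Rec."Document Type");
    Lines.SetRange("Document No.", Rec."No.");
end;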

And option 2:
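Option 2 – filter a local variable and exit it (again a sketch):

procedure SalesLinesOption2(): Record "Sales Line"
var
    Lines: Record "Sales Line";
begin
    Lines.SetRange("Document Type", Rec."Document Type");
    Lines.SetRange("Document No.", Rec."No.");
    exit(Lines); // behind the scenes, an assignment to the return value happens here
end;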

These two functions give a very different outcome. And that’s because the return mechanism is very different in both functions.

In option 1, you actually put the filter on the variable that is being returned.
In option 2, you put it on the local var, but then that local var gets assigned to the return value. So, as such, you do a Rec := Rec .. which .. loses your filters. Remember?

The difference in execution can simply be shown by counting the end result. Like this:
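(A sketch, using the hypothetical procedures from above:)

Message('Option 1: %1', SalesHeader.SalesLinesOption1().Count());
Message('Option 2: %1', SalesHeader.SalesLinesOption2().Count());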

Option 1 will give the expected result: in my case, 1 sales line:

While option 2 will lose the filter, so it returns a count of ALL sales lines:

So – be careful when you’d apply this to a ModifyAll or DeleteAll or something like that  .

This assignment behaviour obviously applies to all types, not just records. Though, I can only think of records suffering immediate consequences .. maybe there are other types that need special attention as well, I don’t know.

Conclusion

Complex Return Types are very useful. Coming up with useful patterns, though, is somewhat more challenging. This one pattern for sub-records I would love to be accessible .. so I wouldn’t mind Microsoft “fixing” the ByRef need for CopyFilters.
Or if I’m missing something, I wouldn’t mind understanding what and why ;-). All feedback is always welcome ;-). I at least hope this explains it a bit to you, as it did to me ;-).


The complexity of complex return types (updated) – Looping a record return type


Update for my previous post: The complexity of complex return types. (I could have just updated that post, but since that wouldn’t notify the people that already read it, I decided to create a small update just to pin your attention to the following..)

After a few comments from Marknitek and Dennis Reineck, I realized that I forgot one important ability that actually makes it (kind of) possible to use the record return type and start looping, with a minimal amount of code.

The trick is to use the GetView/SetView methods. In all honesty, I can’t remember ever using them. But in this case, it does seem to work. So, it could look like this:
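A reconstructed sketch, reusing the SalesLines() procedure from the previous post:

SalesLine.SetView(SalesHeader.SalesLines().GetView()); // takes over sorting AND filters
if SalesLine.FindSet() then
    repeat
        // process the line ..
    until SalesLine.Next() = 0;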

And you’re even able to extend the filters as well, something like:
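(Again a sketch – the extra filter is just an illustration:)

SalesLine.SetView(SalesHeader.SalesLines().GetView());
SalesLine.SetRange(Type, SalesLine.Type::Item); // extend the returned filters
if SalesLine.FindSet() then
    repeat
        // process the line ..
    until SalesLine.Next() = 0;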

So .. if there’s any more feedback/comments .. who knows, a third post might pop up today as well

Protected Variables


“Protected Variables” might not be a familiar topic in the Business Central world. It was at least not that familiar to me ;-). So I wanted to make a small post to make this more known in the community, so that at least you are aware that it exists .. and it might give you some more extensibility options that you might not have considered.. .

The concept is simple: protected variables are variables that you can access from a dependent extension. So .. thinking about this .. it actually only has an effect on extension objects, because that way, you’re kind of “in” the object. So, we’re only talking about the extension objects “table extensions”, “page extensions” and “report extensions”. I’d normally say that you can find more information on Microsoft Docs: Protected Variables – Business Central, but quite honestly, there’s not *that* much information on there.. .

Now, you might know I’m not particularly a fan of “global variables” as such, and a protected variable is actually a global variable that you expose outside the scope of the extension .. so as such, you make it even more global than it would be as a normal global var  (one might wonder what is “protected” about a var that’s being made available to the outside ..  ). However, I do think that these protected variables open up some extensibility possibilities in some scenarios. Especially for pages and reports. For tables, if I’d ever need access to such a variable, I would really doubt that the best design was implemented for that particular feature.. .

How does it work?

Well, I created my own example, which you can find here on GitHub: waldo1001/blog.ProtectedVariables. It has a MainApp and a Dep(endent)App. And you can see that in the objects, I declared a normal var and a protected one, like this:
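A sketch of what that looks like (object names and numbers are illustrative – the repo has the real thing):

page 50100 "WLD Main Page"
{
    PageType = Card;

    layout
    {
        area(Content) { }
    }

    var
        NormalVar: Integer; // only accessible within this app

    protected var
        ProtectedVar: Integer; // accessible from extension objects in dependent apps
}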

Now, in an extension object from the DepApp, you can actually access (and change the value of) that variable, like:
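(Again a sketch with illustrative names:)

pageextension 50200 "WLD Main Page Ext" extends "WLD Main Page"
{
    trigger OnOpenPage()
    begin
        ProtectedVar := 42; // compiles: the var is declared as protected
    end;
}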

While, as you’ll see, when I try to use the “NormalVar” in the extension object, it simply fails, mentioning that the protection level is wrong.. .

Typical Usages

When you browse through the BaseApp, you find about 264 places where Microsoft has declared protected variables. Obviously in the objects that are extendable. At least “mostly”. There are some codeunits with protected variables as well – and I have no idea why that is. Afaik, that totally doesn’t serve any need. If you have a clue – please put it down in the comments  .

Anyway, from a short analysis of browsing through these protected variables, you can clearly see some typical usages, like:

  • Visibility of Dimension Values on pages
  • HideValidationDialog on document tables and reports (and even codeunits .. which I still don’t understand  )
  • About all the statistics-values on statistics pages

Then there are also a bunch of other places … which makes one wonder .. why those, and not others? Like here, where you see a bunch of normal variables, and then a bunch of protected ones:

Well, thing is, people have been requesting numerous variables to become “protected” to serve their extension. Just look at these issues on Microsoft’s GitHub for example.

So, keep in mind this is also an option for you: if you need access to table/page/report-variables from the base app, just file your own issue, and Microsoft will respond fairly quickly.
And if you’re an ISV, you should think about providing your own protected variables if that would make sense for your app ;-).

Open Questions

While writing this post, I did come across some odd things as well. Like the already mentioned protected vars in codeunits, which I don’t have a clue what they are there for.

But the same goes for protected procedures (which is not *really* the topic of this post).

Well, for any of those (protected vars or protected procs) in a codeunit, I have no idea why that would be useful. It doesn’t do anything you’d really expect:

  • a protected proc in a codeunit seems to act like a local proc
  • a protected var in a codeunit seems to act like a global var

E.g., if I had a codeunit like this in a main app:
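(A sketch with illustrative names:)

codeunit 50110 "WLD Main Codeunit"
{
    var
        NormalVar: Integer;

    protected var
        ProtectedVar: Integer;

    procedure NormalProc()
    begin
    end;

    protected procedure ProtectedProc()
    begin
    end;
}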

Then, in a dependent app, I wouldn’t be able to do much with this protected stuff.. . I mean: in case of a subscriber, I can simply access both the protected var and the normal global var. And in case of the procedures, I wouldn’t even be able to run the protected proc, while I would be able to access the global procedure – which is obvious and expected.

So, I have to conclude that the “protected” keyword anywhere other than in an extensible object (i.e. a page, table or report) does not make sense, and maybe is something that simply shouldn’t be available in the language.. .

If anyone can comment on this, or has any more insight – you’re more than welcome to comment below ;-). Here is a twitter-thread about the same topic as well:

The complexity of complex return types – Looping a record return type (extended)


Short post

Do you remember my post about The complexity of complex return types?
And the update The complexity of complex return types (updated) – Looping a record return type?

Well, since the release of Microsoft Dynamics 365 Business Central 18.4, I have another update for you! Let’s call it an “extension” of the update  .

As I mentioned in the second post, it actually WAS possible to loop a returned record by using the GetView/SetView methods.

Since 18.4, the signature of the “Record.Copy” method has changed: the “ByRef” was removed. So now, we can also use the more intuitive “Copy” method to implement the loop:
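(Again a sketch, reusing the SalesLines() procedure from the first post:)

SalesLine.Copy(SalesHeader.SalesLines()); // since 18.4: no ByRef needed anymore, and filters come along
if SalesLine.FindSet() then
    repeat
        // process the line ..
    until SalesLine.Next() = 0;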

You’ll need the new compiler (the latest AL Language extension (7.4.496506) seems to already support the new signature), and the application needs to be on the 18.4 platform, obviously.

Thanks, Natalie, for keeping a close eye on all these GitHub issues

Can you “force” VSCode extensions on your team’s PC’s?


You must have seen the launch of Vjeko’s extension: the “AL Object ID Ninja”. If you haven’t, well, let me make your day and show you some resources:

In short: it’s a VSCode extension that helps you manage object IDs in a team, by reserving them (right from IntelliSense) in a backend that the team is connected to. It doesn’t get more intuitive than this. And the setup is kind of a no-brainer as well, as Vjeko manages the backend for you.

Thing is, while this is an awesome extension, it actually ONLY really works when the ENTIRE team has it installed. If one person doesn’t use it, the object IDs are not being reserved, and you’ll still end up with conflicts. So .. this kind of made me ask myself again …

How can I force my developers to install certain vscode extensions when they are working for our repositories?

Well, I can already tell you: there is no decent solution as far as I am aware. So I’m going to try to make some suggestions that can help you with this .. but if someone has better suggestions or solutions, please let me know ;-).

Extension Recommendations

When searching for a solution, there is one topic you always come across on various forums, and that’s the use of the extensions.json for “workspace recommended extensions“. Recommendations. Nothing mandatory.

It’s simple: you create a file in the .vscode folder of your workspace (or a workspace setting), which you simply fill with a collection of extensions that you think are necessary for this workspace.

Here is an example:
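(The extension IDs below are just illustrations – put in whatever your team needs:)

{
    "recommendations": [
        "ms-dynamics-smb.al",
        "waldo.crs-al-language-extension",
        "vjeko.vjeko-al-objid"
    ]
}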

The result is that you’ll get a notification every time you open the workspace (and are about to contribute to the code) while not having one of the recommendations installed.

You could also simply go to the Extensions pane and filter on “@recommended” to see the list that’s recommended for this workspace.

Easy peasy. Just too bad it’s not a “force install”.

What we did: we created an extensions.json for all workspaces, and we added it as part of our “template project”, so that we’re sure we do what we can to get the necessary extensions onto the developers’ machines.

Your own company extension pack

You might have heard of the “AL Extension Pack“. A very simple, no-code VSCode extension that will install the extensions that are defined in the pack. A simple concept that – when you have the pack installed – has the extra benefit (well, if you’re ok with it ;-)) that it will also install extra extensions when they are added to the pack.

Creating such an extension is VERY simple: you compile it to a VSIX and upload it to the marketplace for easy distribution. Here is everything you need to get started: Your First Extension | Visual Studio Code Extension API. I also did a demo on how to create your own VSCode extension at NAVTechDays in 2017. You can find it here:

Well, such an extension pack is SO easy to create. Even easier than what I explained above. It actually only needs a manifest (a package.json) and that’s it. Just have a look at my pack here: waldo1001/ALExtensionPack: An extension pack for th AL Language (Microsoft Dynamics 365 Business Central) (github.com). You’ll see the only “real” file in there is the package.json. A manifest that’s very readable.

But I already have a VSCode extension for my company – can I use that to also “force” installation of other extensions?

Yes! Very easily. We too have a company extension published on the marketplace, because it lets us publish and maintain company-related snippets that all developers automatically get when any are added or changed. It helps us tremendously to easily apply certain changes to the product.

So, if you also have one, and everyone has it already installed, simply add an “extensionPack” section to that package.json, something like this (the extension IDs are illustrative):
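"extensionPack": [
    "ms-dynamics-smb.al",
    "vjeko.vjeko-al-objid"
]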

The next time your developers update the company extension, these extensions will automatically be installed as well.

On the extension page in VSCode, you’ll get an extra tab (Extension Pack) that shows the extensions that will be installed alongside this one:

Conclusion

This is it. This is all I could come up with on how to make my team use certain extensions. There isn’t really a “force install” or “mandatory to use” or anything like that.
Since for the “AL Object ID Ninja” it’s so important that everyone has it installed (otherwise, IDs will simply not be reserved), I will probably also add something to our (validation) pipeline in DevOps to check whether all IDs have indeed been reserved. If not: fail the validation, with an error saying the developer has to install the extension and reserve the IDs (kind of forcing the developer to work the right way).
