
Open Sourcing DOS 4

April 25, 2024 - Posted in Open Source

See the canonical version of this blog post at the Microsoft Open Source Blog!

Ten years ago, Microsoft released the source for MS-DOS 1.25 and 2.0 to the Computer History Museum, and then later republished them for reference purposes. This code holds an important place in history and is a fascinating read of an operating system that was written entirely in 8086 assembly code nearly 45 years ago.

Today, in partnership with IBM and in the spirit of open innovation, we're releasing the source code to MS-DOS 4.00 under the MIT license. There's a somewhat complex and fascinating history behind the 4.0 versions of DOS, as Microsoft partnered with IBM for portions of the code but also created a branch of DOS called Multitasking DOS that did not see a wide release.

https://github.com/microsoft/MS-DOS

A young English researcher named Connor "Starfrost" Hyde recently corresponded with former Microsoft Chief Technical Officer Ray Ozzie about some of the software in his collection. Amongst the floppies, Ray found unreleased beta binaries of DOS 4.0 that he was sent while he was at Lotus. Starfrost reached out to the Microsoft Open Source Programs Office (OSPO) to explore releasing DOS 4 source, as he is working on documenting the relationship between DOS 4, MT-DOS, and what would eventually become OS/2. Some later versions of these Multitasking DOS binaries can be found around the internet, but these new Ozzie beta binaries appear to be much earlier, unreleased, and also include the ibmbio.com source. 

Scott Hanselman, with the help of internet archivist and enthusiast Jeff Sponaugle, has imaged these original disks and carefully scanned the original printed documents from this "Ozzie Drop". Microsoft, along with our friends at IBM, think this is a fascinating piece of operating system history worth sharing. 

Jeff Wilcox and OSPO went to the Microsoft Archives, and while they were unable to find the full source code for MT-DOS, they did find MS DOS 4.00, which we're releasing today, alongside these additional beta binaries, PDFs of the documentation, and disk images. We will continue to explore the archives and may update this release if more is discovered. 

Thank you to Ray Ozzie, Starfrost, Jeff Sponaugle, Larry Osterman, our friends at the IBM OSPO, as well as the makers of digital archeology software including, but not limited to, Greaseweazle, Fluxengine, the Aaru Data Preservation Suite, and the HxC Floppy Emulator. Above all, thank you to the original authors of this code, some of whom still work at Microsoft and IBM today!

If you'd like to run this software yourself and explore, we have successfully run it directly on an original IBM PC XT, a newer Pentium, and within the open source PCem and 86box emulators. 


Updating to .NET 8, updating to IHostBuilder, and running Playwright Tests within NUnit headless or headed on any OS

March 07, 2024 - Posted in ASP.NET | DotNetCore

I've been doing not just Unit Testing for my sites but full-on Integration Testing and Browser Automation Testing since as early as 2007, with Selenium. Lately, however, I've been using the faster and generally more compatible Playwright. It has one API and can test on Windows, Linux, Mac, locally, in a container (headless), in my CI/CD pipeline, on Azure DevOps, or in GitHub Actions.

For me, it's that last moment of truth to make sure that the site runs completely from end to end.

I can write those Playwright tests in something like TypeScript, and I could launch them with node, but I like running them from NUnit and using that test runner and test harness as my jumping-off point for my .NET applications. I'm used to right-clicking and "run unit tests" or, even better, right-clicking and "debug unit tests" in Visual Studio or VS Code. This gets me the benefit of all of the assertions of a full unit testing framework, and all the benefits of using something like Playwright to automate my browser.

In 2018 I was using WebApplicationFactory and some tricky hacks to basically spin up ASP.NET on what was then .NET Core 2.1 within the unit tests and then launch Selenium. This was kind of janky and required me to manually start a separate process and manage its life cycle. However, I kept on with this hack for a number of years, basically trying to get the Kestrel Web Server to spin up inside of my unit tests.

I've recently upgraded my main site and podcast site to .NET 8. Keep in mind that I've been moving my websites forward from early early versions of .NET to the most recent versions. The blog is happily running on Linux in a container on .NET 8, but its original code started in 2002 on .NET 1.1.

Now that I'm on .NET 8, I scandalously discovered (as my unit tests stopped working) that the rest of the world had moved from IWebHostBuilder to IHostBuilder five versions of .NET ago. Gulp. Say what you will, but the backward compatibility is impressive.

As such, my code in Program.cs changed from this:

public static void Main(string[] args)
{
    CreateWebHostBuilder(args).Build().Run();
}

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

to this:

public static void Main(string[] args)
{
    CreateHostBuilder(args).Build().Run();
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webHostBuilder => webHostBuilder.UseStartup<Startup>());

Not a major change on the outside but tidies things up on the inside and sets me up with a more flexible generic host for my web app.

My unit tests stopped working because my Kestrel Web Server hack was no longer firing up my server.

Here is an example of my goal from a Playwright perspective within a .NET NUnit test.

[Test]
public async Task DoesSearchWork()
{
    await Page.GotoAsync(Url);

    await Page.Locator("#topbar").GetByRole(AriaRole.Link, new() { Name = "episodes" }).ClickAsync();

    await Page.GetByPlaceholder("search and filter").ClickAsync();

    await Page.GetByPlaceholder("search and filter").TypeAsync("wife");

    const string visibleCards = ".showCard:visible";

    var waiting = await Page.WaitForSelectorAsync(visibleCards, new PageWaitForSelectorOptions() { Timeout = 500 });

    await Expect(Page.Locator(visibleCards).First).ToBeVisibleAsync();

    await Expect(Page.Locator(visibleCards)).ToHaveCountAsync(5);
}

I love this. Nice and clean. Certainly here we are assuming that we have a URL in that first line, which will be localhost something, and then we assume that our web application has started up on its own.
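For context, the Page property and the Expect() helper in that test come from Playwright's NUnit integration. Here's a minimal sketch of the surrounding test class shape, assuming the Microsoft.Playwright.NUnit package's PageTest base class (the class name here is my own placeholder):

using Microsoft.Playwright.NUnit;
using NUnit.Framework;

// Hypothetical class name; PageTest (from Microsoft.Playwright.NUnit) supplies
// the Page property and the Expect() assertion helper used in DoesSearchWork.
[TestFixture]
public class SiteIntegrationTests : PageTest
{
    private string Url; // set in the OneTimeSetUp shown below to the Kestrel address

    // DoesSearchWork() and the OneTimeSetUp/OneTimeTearDown from the next
    // listings live inside this class.
}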

Here is the setup code that starts my new "web application test builder factory" - yeah, the name is stupid, but it's descriptive. Note the OneTimeSetUp and the OneTimeTearDown. This starts my web app within the context of my TestHost. Note that the :0 makes the app find a free port, which I then, sadly, have to dig out and put into the Url private field for use within my unit tests. Note that the <Startup> is in fact my Startup class within Startup.cs, which hosts my app's pipeline; Configure and ConfigureServices get set up here so routing all works.

private string Url;
private WebApplication? _app = null;

[OneTimeSetUp]
public void Setup()
{
    var builder = WebApplicationTestBuilderFactory.CreateBuilder<Startup>();

    var startup = new Startup(builder.Environment);
    // listen on any local port (hence the 0)
    builder.WebHost.ConfigureKestrel(o => o.Listen(IPAddress.Loopback, 0));
    startup.ConfigureServices(builder.Services);
    _app = builder.Build();

    startup.Configure(_app, _app.Configuration);
    _app.Start();

    //you are kidding me
    Url = _app.Services.GetRequiredService<IServer>().Features.GetRequiredFeature<IServerAddressesFeature>().Addresses.Last();
}

[OneTimeTearDown]
public async Task TearDown()
{
    await _app.DisposeAsync();
}

So what horrors are buried in WebApplicationTestBuilderFactory? The first bit is bad and we should fix it for .NET 9. The rest is actually very nice, with a hat tip to David Fowler for his help and guidance! This is the magic and the ick in one small helper class.

public class WebApplicationTestBuilderFactory
{
    public static WebApplicationBuilder CreateBuilder<T>() where T : class
    {
        //This ungodly code requires an unused reference to the MvcTesting package that hooks up
        //MSBuild to create the manifest file that is read here.
        var testLocation = Path.Combine(AppContext.BaseDirectory, "MvcTestingAppManifest.json");
        var json = JsonObject.Parse(File.ReadAllText(testLocation));
        var asmFullName = typeof(T).Assembly.FullName ?? throw new InvalidOperationException("Assembly Full Name is null");
        var contentRootPath = json?[asmFullName]?.GetValue<string>();

        //spin up a real live web application inside TestHost.exe
        var builder = WebApplication.CreateBuilder(
            new WebApplicationOptions()
            {
                ContentRootPath = contentRootPath,
                ApplicationName = asmFullName
            });
        return builder;
    }
}

The first 4 lines are nasty. Because the test runs in the context of a different directory and my website needs to run within the context of its own content root path, I have to force the content root path to be correct, and the only way to do that is by reading a file generated within MSBuild by the (aging) MvcTesting package. The package is not otherwise used, but referencing it gets it into the build and produces the manifest file that I then use to pull out the directory.
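For illustration only, that manifest is just a JSON map from an assembly's full name to its content root path on disk, which is what the lookup above reads. A hypothetical example (the name and path here are made up):

{
  "MyWebSite, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null": "C:\\src\\MyWebSite"
}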

If we can get rid of that "hack" and pull the directory from context elsewhere, then this helper function turns into a single line and .NET 9 gets WAY WAY more testable!

Now I can run my Unit Tests AND Playwright Browser Integration Tests across all OS's, headed or headless, in docker or on the metal. The site is updated to .NET 8 and all is right with my code. Well, it runs at least. ;)
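If you want to flip between headless and headed runs yourself, Playwright's .NET test integration respects a HEADED environment variable, so something like this should work from a bash prompt (adjust the environment variable syntax for PowerShell):

# headless run (the default)
dotnet test

# headed run, so you can watch the browser click around
HEADED=1 dotnet test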


Using WSL and Let's Encrypt to create Azure App Service SSL Wildcard Certificates

June 27, 2023 - Posted in Azure

There are many Let's Encrypt automation tools for Azure, but I also wanted to see if I could use certbot in WSL to generate a wildcard certificate for the Azure Friday website and then upload the resulting certificates to Azure App Service.

Azure App Service ultimately needs a specific format - a .PFX file - that includes the full certificate chain and all intermediates.

Per the docs, App Service private certificates must meet the following requirements:

  • Exported as a password-protected PFX file, encrypted using triple DES.
  • Contains private key at least 2048 bits long
  • Contains all intermediate certificates and the root certificate in the certificate chain.

If you have a PFX that doesn't meet all these requirements, you can have Windows re-encrypt the file.
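If you want to check what an existing PFX actually contains before uploading it (which encryption it uses and whether the chain is in there), openssl can summarize it. A quick, hedged example:

# prompts for the import password and prints the MAC and encryption
# algorithms plus the certificate/key bags, without dumping the contents
openssl pkcs12 -info -in AzureFriday2023.pfx -noout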

I use WSL and certbot to create the cert, then I import/export in Windows and upload the resulting PFX.

Within WSL, install certbot:

sudo apt update
sudo apt install python3 python3-venv libaugeas0
sudo python3 -m venv /opt/certbot/
sudo /opt/certbot/bin/pip install --upgrade pip
sudo /opt/certbot/bin/pip install certbot

Then I generate the cert. You'll get a nice text UI from certbot and update your DNS as a verification challenge. Note these are two separate commands (certbot, then openssl) - make sure your email, domains, subdomains, and paths are correct.

sudo certbot certonly --manual --preferred-challenges=dns --email YOUR@EMAIL.COM \
    --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos --manual-public-ip-logging-ok -d "azurefriday.com" -d "*.azurefriday.com"

sudo openssl pkcs12 -export -out AzureFriday2023.pfx \
    -inkey /etc/letsencrypt/live/azurefriday.com/privkey.pem \
    -in /etc/letsencrypt/live/azurefriday.com/fullchain.pem

I then copy the resulting file to my desktop (check your desktop path) so it's now in the Windows world.

sudo cp AzureFriday2023.pfx /mnt/c/Users/Scott/OneDrive/Desktop

Now from Windows, import the PFX, note the thumbprint and export that cert.

Import-PfxCertificate -FilePath "AzureFriday2023.pfx" -CertStoreLocation Cert:\LocalMachine\My `
    -Password (ConvertTo-SecureString -String 'PASSWORDHERE' -AsPlainText -Force) -Exportable

Export-PfxCertificate -Cert Microsoft.PowerShell.Security\Certificate::LocalMachine\My\597THISISTHETHUMBNAILCF1157B8CEBB7CA1 `
    -FilePath 'AzureFriday2023-fixed.pfx' -Password (ConvertTo-SecureString -String 'PASSWORDHERE' -AsPlainText -Force)

Then upload the cert to the Certificates section of your App Service, under Bring Your Own Cert.
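If you'd rather script that upload than click through the portal, the Azure CLI can do it too - roughly like this (the app name and resource group are placeholders):

az webapp config ssl upload \
  --name YOUR-APP-NAME \
  --resource-group YOUR-RESOURCE-GROUP \
  --certificate-file AzureFriday2023-fixed.pfx \
  --certificate-password 'PASSWORDHERE'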

Custom Domains in Azure App Service

Then under Custom Domains, click Update Binding and select the new cert (with the latest expiration date).


The next step is to make this even more automatic or pick a more automated solution, but for now I'll worry about this in September, and it solved my expensive wildcard certificate issue.


GitHub Copilot for CLI for PowerShell

April 25, 2023 - Posted in AI | PowerShell

GitHub Next has this cool project that is basically Copilot for the CLI (command line interface). You can sign up for their waitlist at the Copilot for CLI site.

Copilot for CLI provides three shell commands: ??, git? and gh?

This is cool and all, but I use PowerShell. Turns out these ?? commands are just router commands to a larger EXE called github-copilot-cli. So if you go "?? something" you're really going "github-copilot-cli what-the-shell something."

So this means I should be able to do the same/similar aliases for my PowerShell prompt AND change the injected prompt (look at me, I'm a prompt engineer) to add 'use powershell to.'

Now it's not perfect, but hopefully it will make the point to the Copilot CLI team that PowerShell needs love also.

Here are my aliases. Feel free to suggest if these suck. Note the addition of "use powershell to" for the ?? one. I may make a ?? and a p? where one does bash and one does PowerShell. I could also have it use wsl.exe and shell out to bash - there's a rough sketch of that idea after the functions below. Lots of possibilities.

function ?? {
    $TmpFile = New-TemporaryFile
    github-copilot-cli what-the-shell ('use powershell to ' + $args) --shellout $TmpFile
    if ([System.IO.File]::Exists($TmpFile)) {
        $TmpFileContents = Get-Content $TmpFile
        if ($TmpFileContents -ne $null) {
            Invoke-Expression $TmpFileContents
            Remove-Item $TmpFile
        }
    }
}

function git? {
    $TmpFile = New-TemporaryFile
    github-copilot-cli git-assist $args --shellout $TmpFile
    if ([System.IO.File]::Exists($TmpFile)) {
        $TmpFileContents = Get-Content $TmpFile
        if ($TmpFileContents -ne $null) {
            Invoke-Expression $TmpFileContents
            Remove-Item $TmpFile
        }
    }
}

function gh? {
    $TmpFile = New-TemporaryFile
    github-copilot-cli gh-assist $args --shellout $TmpFile
    if ([System.IO.File]::Exists($TmpFile)) {
        $TmpFileContents = Get-Content $TmpFile
        if ($TmpFileContents -ne $null) {
            Invoke-Expression $TmpFileContents
            Remove-Item $TmpFile
        }
    }
}
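And here's a rough sketch of the bash-via-WSL idea mentioned above. The b?? name and the wsl.exe hand-off are just my guesses at how it could look, not something from the Copilot CLI docs:

# Hypothetical variant: ask Copilot for a bash command, then run it inside WSL
function b?? {
    $TmpFile = New-TemporaryFile
    github-copilot-cli what-the-shell ('use bash to ' + $args) --shellout $TmpFile
    if ([System.IO.File]::Exists($TmpFile)) {
        $TmpFileContents = Get-Content $TmpFile
        if ($TmpFileContents -ne $null) {
            wsl.exe bash -c "$TmpFileContents"
            Remove-Item $TmpFile
        }
    }
}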

It also then offers to run the command. Very smooth.


Hope you like it. Lots of fun stuff happening in this space.


Use your own user @ domain for Mastodon discoverability with the WebFinger Protocol without hosting a server

December 19, 2022 - Posted in Musings

Mastodon is a free, open-source social networking service that is decentralized and distributed. It was created in 2016 as an alternative to centralized social media platforms such as Twitter and Facebook.

One of the key features of Mastodon is the use of the WebFinger protocol, which allows users to discover and access information about other users on the Mastodon network. WebFinger is a simple HTTP-based protocol that enables a user to discover information about other users or resources on the internet by using their email address or other identifying information. The WebFinger protocol is important for Mastodon because it enables users to find and follow each other on the network, regardless of where they are hosted.

WebFinger uses a "well known" path structure when calling a domain. You may be familiar with the robots.txt convention: we all just agree that robots.txt will sit at the top path of everyone's domain.

Again, WebFinger lets a user (or a search engine) discover information about other users or resources on the internet by using their email address or other identifying information. Mine is first name at last name dot com, so... my personal WebFinger API endpoint is here: https://www.hanselman.com/.well-known/webfinger

The idea is that...

  1. A user sends a WebFinger request to a server, using the email address or other identifying information of the user or resource they are trying to discover.

  2. The server looks up the requested information in its database and returns a JSON object containing the information about the user or resource. This JSON object is called a "resource descriptor."

  3. The user's client receives the resource descriptor and displays the information to the user.

The resource descriptor contains various types of information about the user or resource, such as their name, profile picture, and links to their social media accounts or other online resources. It can also include other types of information, such as the user's public key, which can be used to establish a secure connection with the user.

There's a great explainer here as well. From that page:

When someone searches for you on Mastodon, your server will be queried for accounts using an endpoint that looks like this:

GET https://${MASTODON_DOMAIN}/.well-known/webfinger?resource=acct:${MASTODON_USER}@${MASTODON_DOMAIN}

Note that Mastodon user names start with @ so they are @username@someserver.com. Just like Twitter would be @shanselman@twitter.com, I can be @shanselman@hanselman.com now!

Searching for me with Mastodon

So perhaps https://www.hanselman.com/.well-known/webfinger?resource=acct:FRED@HANSELMAN.COM

Mine returns

{
  "subject":"acct:shanselman@hachyderm.io",
  "aliases":
  [
    "https://hachyderm.io/@shanselman",
    "https://hachyderm.io/users/shanselman"
  ],
  "links":
  [
    {
      "rel":"http://webfinger.net/rel/profile-page",
      "type":"text/html",
      "href":"https://hachyderm.io/@shanselman"
    },
    {
      "rel":"self",
      "type":"application/activity+json",
      "href":"https://hachyderm.io/users/shanselman"
    },
    {
      "rel":"http://ostatus.org/schema/1.0/subscribe",
      "template":"https://hachyderm.io/authorize_interaction?uri={uri}"
    }
  ]
}

This file should be returned with a MIME type of application/jrd+json.
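You can sanity-check both the JSON and that header with curl, for example against my endpoint from above:

# -i includes response headers; look for Content-Type: application/jrd+json
curl -i "https://www.hanselman.com/.well-known/webfinger?resource=acct:shanselman@hanselman.com"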

My site is an ASP.NET Razor Pages site, so I just did this in Startup.cs to map that well known URL to a page/route that returns the JSON needed.

services.AddRazorPages().AddRazorPagesOptions(options =>
{
    options.Conventions.AddPageRoute("/robotstxt", "/Robots.Txt"); //i did this before, not needed
    options.Conventions.AddPageRoute("/webfinger", "/.well-known/webfinger");
    options.Conventions.AddPageRoute("/webfinger", "/.well-known/webfinger/{val?}");
});

Then I made a webfinger.cshtml like this. Note I have to escape the @ signs as @@ because it's Razor.

@page
@{
    Layout = null;
    this.Response.ContentType = "application/jrd+json";
}
{
  "subject":"acct:shanselman@hachyderm.io",
  "aliases":
  [
    "https://hachyderm.io/@@shanselman",
    "https://hachyderm.io/users/shanselman"
  ],
  "links":
  [
    {
      "rel":"http://webfinger.net/rel/profile-page",
      "type":"text/html",
      "href":"https://hachyderm.io/@@shanselman"
    },
    {
      "rel":"self",
      "type":"application/activity+json",
      "href":"https://hachyderm.io/users/shanselman"
    },
    {
      "rel":"http://ostatus.org/schema/1.0/subscribe",
      "template":"https://hachyderm.io/authorize_interaction?uri={uri}"
    }
  ]
}

This is a static response, but if I was hosting pages for more than one person I'd want to take in the url with the user's name, and then map it to their aliases and return those correctly.
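Here's a rough sketch of what that multi-user version could look like as a Razor page model. It's purely hypothetical (the dictionary, class name, and accounts are made up), just to show reading the resource query string and branching on it:

// Webfinger.cshtml.cs - hypothetical multi-user version
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class WebfingerModel : PageModel
{
    // map "acct:user@yourdomain.com" to the ActivityPub profile it should resolve to
    private static readonly Dictionary<string, string> Accounts = new()
    {
        ["acct:scott@example.com"] = "https://hachyderm.io/users/shanselman"
    };

    public IActionResult OnGet([FromQuery] string? resource)
    {
        if (resource is null || !Accounts.TryGetValue(resource.ToLowerInvariant(), out var profile))
        {
            return NotFound();
        }

        // return a minimal JRD document with the MIME type Mastodon expects
        return new JsonResult(new
        {
            subject = resource,
            links = new[]
            {
                new { rel = "self", type = "application/activity+json", href = profile }
            }
        })
        { ContentType = "application/jrd+json" };
    }
}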

Even easier, you can just use the JSON file of your own Mastodon server's webfinger response and SAVE IT as a static json file and copy it to your own server!

As long as your server returns the right JSON from that well known URL then it'll work.

So this is my template https://hachyderm.io/.well-known/webfinger?resource=acct:shanselman@hachyderm.io from where I'm hosted now.

If you want to get started with Mastodon, start here: https://github.com/joyeusenoelle/GuideToMastodon/ - it feels like Twitter circa 2007, except it's not owned by anyone and is based on web standards like ActivityPub.

Hope this helps!

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.