Getting started with Azure Sphere: Part 2 (integration with Azure services)

This content is 4 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last week, I wrote about my experiences getting some sample code running on an Avnet Azure Sphere Starter Kit. That first post walked through installing the SDK, setting up my development environment (I chose to use Visual Studio Code), configuring the device (including creating a tenant, claiming the device, connecting the device to Wi-Fi, and updating the OS), and downloading and deploying a sample app.

Since then, I've managed to make some steps forward with the Element 14 out-of-the-box demo by Brian Willess (part 1, part 2 and part 3). Rather than repeat Brian's posts, I'll focus on what I did to work around a few challenges along the way.

Working around compiler errors in Visual Studio Code using the command line

My first issue was that the Element 14 blogs are based on Visual Studio – not Visual Studio Code – and I was experiencing issues where Code would complain that it couldn't find a compiler.

Thanks to my colleague Andrew Hawker, who was also experimenting with his Starter Kit (albeit using a Linux VM), I had a workaround: run CMake and Ninja from the command line, then sideload the resulting app package onto the device from the Azure Sphere Developer Command Prompt:

cmake ^
-G "Ninja" ^
-DCMAKE_TOOLCHAIN_FILE="C:\Program Files (x86)\Microsoft Azure Sphere SDK\CMakeFiles\AzureSphereToolchain.cmake" ^
-DAZURE_SPHERE_TARGET_API_SET="4" ^
-DAZURE_SPHERE_TARGET_HARDWARE_DEFINITION_DIRECTORY="C:\Users\%username%\AzureSphereHacksterTTC\Hardware\avnet_mt3620_sk" ^
-DAZURE_SPHERE_TARGET_HARDWARE_DEFINITION="avnet_mt3620_sk.json" ^
--no-warn-unused-cli ^
-DCMAKE_BUILD_TYPE="Debug" ^
-DCMAKE_MAKE_PROGRAM="ninja.exe" ^
"C:\Users\%username%\AzureSphereHacksterTTC\AvnetStarterKitReferenceDesign"
ninja
azsphere device sideload deploy --imagepackage AvnetStarterKitReferenceDesign.imagepackage

I wasn't able to view the debug output (despite my efforts to use PuTTY to read 192.168.35.2:2342) but I was confident that the app was working on the device, so I moved on to integrating with cloud services.
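(For reference, the Microsoft documentation describes viewing this output by opening a raw TCP connection to 192.168.35.2 on port 2342 once the app has been started for debugging, so a PuTTY session along these lines should, in theory, have done the job – I never got to the bottom of why it didn't work for me:)

putty.exe -raw 192.168.35.2 -P 2342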

Brian Willess has since updated the repo, so it should now work with Visual Studio Code (at least for the high-level application) and I have successfully tested the non-connected scenario (part 1) with the changes.

Integration with Azure IoT Hub, device twins and Azure Time Series Insights

Part 2 of the series of posts I was working through is where the integration starts. The basic steps (refer to Brian Willess' post for full details) were:

  1. Create an Azure IoT hub, which is a cloud-hosted back-end for secure communication with Internet of Things (IoT) devices, of which the Azure Sphere is just one of many options.
  2. Create and configure the IoT Hub Device Provisioning Service (DPS), including:
    • Downloading a certificate from the Azure Sphere tenant (using azsphere tenant download-CA-certificate --output CAcertificate.cer at the Azure Sphere Developer Command Prompt) and using this to authenticate with the DPS. This includes validating ownership with the verification code generated by the Azure portal (azsphere tenant download-validation-certificate --output validation.cer --verificationcode verificationcode) and uploading the resulting certificate to the portal.
    • Creating an Enrollment Group, to enrol any newly-claimed device whose certificate is signed by my tenant. This stage also includes the creation of an initial device twin state, editing the JSON to include some extra lines (the full twin document is sketched after this list):
      "userLedRed": false,
      "userLedGreen": false,
      "userLedBlue": true
    • Illuminating the LED blue initially means that we can see when the Azure Sphere has successfully connected to the IoT Hub.
  3. Edit the application source code (I used Visual Studio Code but any editor will do) to:
    • Uncomment #define IOT_HUB_APPLICATION in build_options.h.
    • Update the CmdArgs line in app_manifest.json with the ID Scope from the DPS Overview in the Azure portal.
    • Update the AllowedConnections line in app_manifest.json with the FQDNs from the DPS Overview (Global Device Endpoint) and the IoT Hub (Hostname) in the Azure portal.
    • Update the DeviceAuthentication line in app_manifest.json with the Azure Sphere tenant ID (which may be obtained using azsphere tenant show-selected at the Azure Sphere Developer Command Prompt).
  4. Build and run the app. I used the CLI as detailed above, but this should now be possible within Visual Studio Code.
  5. Use the device twin capabilities to manipulate the device, for example turning LEDs on/off (though clearly there are more complex scenarios that could be used in real deployments!).
  6. Create a Time Series Insights resource in Azure, which is an analytics solution to turn IoT data into actionable insights.
    • Create the Time Series Insights environment using the existing IoT Hub with an access policy of iothubowner and consumer group of $Default.
  7. Add events inside the Time Series Insights environment to view the sensor readings from the Azure Sphere device.
Time Series Insights showing sensor data from an Azure Sphere device.
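For reference, the initial device twin state ends up looking something like this – a sketch based on the standard shape of a DPS initial twin document, where only the three userLed properties are the additions described above:

{
  "tags": {},
  "properties": {
    "desired": {
      "userLedRed": false,
      "userLedGreen": false,
      "userLedBlue": true
    }
  }
}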

Time Series Insights can get expensive for a simple test project that delivers no real value. I could quickly have used my entire month's Azure credits, so I deleted the resource group containing my Azure Sphere resources before moving on to the next section…

Integration with Azure IoT Central

Azure IoT Central is a hosted IoT platform. It is intended to take away much of the underlying complexity and let organisations quickly build IoT solutions using just a web interface.

Following part 3 in Brian Willess' Azure Sphere series, I was able to get my device working with IoT Central, both using the web interface to control the LEDs on the board and pushing sensor data to a dashboard. As before, these are just the basic steps – refer to Brian Willess' post for full details:

  1. Create a new IoT Central application.
  2. Select or create a template:
    • Use the IoT device custom template.
    • Either import an existing capability model (this was mine) or create one, adding interfaces (sensors, buttons, information, etc.) and capabilities.
    • Create custom views – e.g. for LED device control or for device metrics.
  3. Publish the template.
  4. Configure DPS:
    • Download a certificate from the Azure Sphere tenant using azsphere tenant download-CA-certificate --output CAcertificate.cer at the Azure Sphere Developer Command Prompt. (This is the same certificate already generated for the IoT Hub example.)
    • Upload the certificate to IoT Central and generate a validation code, then use azsphere tenant download-validation-certificate --output validation.cer --verificationcode verificationcode to apply this.
    • Upload the new validation certificate.
  5. Create a non-simulated device in IoT Central.
  6. Run ShowIoTCentralConfig.exe, providing the ID Scope and a shared access signature key for the device (both obtained from the Device Connection details in IoT Central) and the Device ID (from the device created in the previous step). Make a note of the details provided by the tool.
  7. Configure the application source code to connect to IoT Central:
    • Uncomment #define IOT_CENTRAL_APPLICATION in build_options.h.
    • Update the CmdArgs line in app_manifest.json with the ID Scope obtained from the Device Connection details in IoT Central.
    • Update the AllowedConnections line in app_manifest.json with the FQDNs obtained by running ShowIoTCentralConfig.exe.
    • Update the DeviceAuthentication line in app_manifest.json with the Azure Sphere tenant ID (which may be obtained using azsphere tenant show-selected at the Azure Sphere Developer Command Prompt). The resulting manifest should resemble the sketch after this list.
  8. Build and run the application.
  9. Associate the Azure Sphere device with IoT Central (the device created previously was just a "dummy" to get some configuration details). IoT Central should have found the real device, but it will need to be "migrated" to the appropriate device group to pick up the template created earlier.
  10. Open the device and enjoy the data!
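The app_manifest.json changes follow the same pattern in both the IoT Hub and the IoT Central scenarios. This sketch shows roughly where each value lands – the angle-bracketed placeholders are mine, to be replaced with the real ID Scope, endpoint FQDNs and tenant ID gathered in the steps above, and the other field values are indicative rather than copied from the sample:

{
  "SchemaVersion": 1,
  "Name": "AvnetStarterKitReferenceDesign",
  "ComponentId": "<existing component GUID>",
  "EntryPoint": "/bin/app",
  "CmdArgs": ["<ID Scope>"],
  "Capabilities": {
    "AllowedConnections": [
      "global.azure-devices-provisioning.net",
      "<IoT Hub or IoT Central application FQDN>"
    ],
    "DeviceAuthentication": "<Azure Sphere tenant ID>"
  },
  "ApplicationType": "Default"
}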

I hadn't expected IoT Central to cost much (if anything, because the first two devices are free) but I think the app I'm using is pretty chatty, so I'm being charged for extra messages (30,000 a month sounds like a lot until you realise it's only around 40 an hour for a device that's sending frequent updates to/from the service). It seems to be costing just under £1/day (from a pool of credits), so I won't worry too much!

What’s next for my Azure Sphere device?

Having used Brian Willess’ posts at Element 14 to get an idea of how this should work, I think my next step is to buy some external sensors and write some real code to monitor something real… unfortunately the sensors I want are on back order until the summer but watch this space!

Getting started with Azure Sphere: Part 1 (setup and running a sample app)

This content is 4 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Late in 2019, I got my hands on an Azure Sphere Starter Kit, which I’ve been intending to use for an IoT project, using some of the on-board sensors for temperature and potentially an external one for humidity…

For those who aren’t familiar with Azure Sphere, it’s Microsoft’s Secure Internet of Things (IoT) solution using certified chips, a custom operating system and a security service. My device is an Avnet Azure Sphere MT3620 Starter Kit and this blog post focuses on getting it up and running with one of the sample applications that Microsoft provides, using Windows 10 (other options include Linux).

Installing Visual Studio Code and the Azure Sphere SDK

Having obtained the kit, the next stop was Microsoft's Getting Started with Azure Sphere page. I downloaded and installed Visual Studio Code (I don't really need the whole Visual Studio 2019 application – though I later found that a lot of the advice on the Internet assumes that's what you're using…) and then immediately found that there are two versions of the Azure Sphere Software Development Kit (SDK). According to the Microsoft docs, either can be used with Visual Studio Code, but I found that setup for the Azure Sphere SDK for Visual Studio failed when it couldn't find Visual Studio (not really surprising), so I used the Azure Sphere SDK for Windows.

Connecting the hardware

I plugged in the Avnet Azure Sphere Starter Kit, using the supplied USB cable, and watched as Windows installed drivers, after which a virtual network interface was present and three COM ports appeared in Device Manager.

Setting up my dev environment

Installing Visual Studio Code and the Azure Sphere SDK was only the first part of getting ready to create code for the device. I needed to install the Azure Sphere extension (easily found in the Extensions Marketplace).

The Azure Sphere extension also installs two dependencies:

  • C/C++
  • CMake Tools

I also needed to install CMake (in my case, version 3.17.1). Not really knowing what I was doing, I followed the defaults but, on reflection, I probably should have let CMake add its directory to the system %PATH% variable (I later uninstalled and reinstalled CMake to do this, but could just have added C:\Program Files\CMake\bin to the Path in the user environment variables).

The final installation was Ninja. Windows Defender SmartScreen blocked this app, but I was later able to work around that by unblocking it in the properties for ninja.exe.

I missed the point in the Microsoft documentation that said I needed to manually add Ninja to the %PATH% environment variable, but I later went back and added the folder that I'd copied ninja.exe to (which, for me, was C:\Users\%username%\Tools).
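For anyone who'd rather skip the environment variable dialogue boxes, both directories can be appended to the user Path in one go from a Command Prompt. This is just a sketch, assuming the default CMake location and my own Tools folder – and note that setx folds the machine PATH into the user PATH, so editing the variable through System Properties is arguably cleaner (either way, open a new command prompt afterwards to pick up the change):

setx PATH "%PATH%;C:\Program Files\CMake\bin;C:\Users\%username%\Tools"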

(The above steps were my second attempt – the first time I installed MinGW-W64 to work around issues when Visual Studio Code couldn’t find a compiler, together with several changes in settings.json. I later removed all of that and managed to compile and deploy a sample application using just the settings above…)

Configuring the Azure Sphere device for use

There are a few steps required to configure the device for use. These are all completed using the Azure Sphere Developer Command Prompt, which was installed earlier with the SDK.

Creating an Azure Sphere tenant and claiming the device

Each Azure Sphere device must be “claimed” and associated with a “tenant”. I followed the Microsoft documentation to do this…

azsphere login --newuser user@domain.tld

After completing Multi-Factor Authentication (MFA) and confirming that I wanted to allow Azure Sphere to use my account, I was logged in, but with a warning that I didn't have access to any Azure Sphere tenants. So, I created one:

azsphere tenant create --name "Mark Wilson"

Warning – more research required: I used a Microsoft account, as per the Microsoft instructions, but am now concerned that I should have used an Azure Active Directory (organisational/work or school) account – especially as role-based access control is supported from Azure Sphere 19.10 onwards. As a device can only be claimed once and, once claimed, is permanently associated with its Azure Sphere tenant, I'm stuck with these settings now…

I then went ahead and claimed the device:

azsphere device claim

Connecting to Wi-Fi and updating the device operating system

I checked the current OS version on the device:

azsphere device show-deployment-status

The output showed that not only was the OS out of date, but also that the device was not connected to a network, so I connected to Wi-Fi:

azsphere device wifi show-status
azsphere device wifi add --ssid "SSID" --psk password
azsphere device wifi show-status

Now, with network connectivity in place, the device had a fighting chance of an OS update. According to the Microsoft documentation:

The Azure Sphere device checks for Azure Sphere OS and application updates each time it boots, when it initially connects to the internet, and at 24-hour intervals thereafter. If updates are available, download and installation could take as much as 15-20 minutes and might cause the device to restart.


I tried several restarts using azsphere device restart with no success. In the end, I left the device connected overnight and, by the morning, it had updated to 20.03.

Finally, I enabled application development on the device, ready to download some code and deploy an application:

azsphere device enable-development

Downloading a sample app

My initial attempts to use the app that I wanted didn't work, so I decided to test my setup with one of the Microsoft quickstarts.

I needed to use Git to clone the Azure Sphere samples repo, so that meant installing Git. Then, from the Terminal in Visual Studio Code, I ran git clone https://github.com/Azure/azure-sphere-samples.git.

I then opened the Samples\HelloWorld\HelloWorld_HighLevelApp folder in Visual Studio Code, ready to build and deploy the app.

Building and deploying the app

Having prepared my dev environment, configured the device and downloaded some sample code, I followed the instructions in the Visual Studio Code Azure Sphere extension and ran the following from the Command Palette: Azure Sphere: Configure Settings (selecting High-Level Application) and CMake: Build.

I was then able to build and deploy the sample app to my Azure Sphere device by starting a debug session (F5) – and was rewarded with a blinking LED on the board!

Azure Sphere Starter Kit with blinking LED

I could also view the application status with azsphere device app show-status.

Next steps

The next step is to get the app I really wanted to use working on the device, making use of some of the on-board sensors and then integrating this with some of the Azure services. I’m having trouble compiling that code at the moment, so that blog post may be a while longer…


Publishing to GitHub from Visual Studio

This content is 10 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Those who follow me on Twitter (@markwilsonit) may be aware that I'm attempting to learn some C# skills in my spare time (what little of that I have). It's not my first foray into coding – I have a Computer Studies degree and, in my youth, I wrote code in a variety of languages (BASIC, C, Turbo Pascal, Modula-2, 68000 assembler, COBOL, Visual Basic, C++ and probably some others too) – but, aside from a little bit of C++ on the Arduino and the odd bit of PowerShell, I haven't done much in the last 20 years. As my career moves further towards management, I'm increasingly convinced that it's technology I enjoy – and I'm seriously considering a move from infrastructure to software…

Anyway…

I've been following a Pluralsight C# course from Scott Allen (@ode2code) and, partway through, Scott uses System.Speech to demonstrate adding references to assemblies. I had a play, adapting something I'd written earlier to talk to me as well as output to the console – nothing grand – just a bit of fun. After showing it to my sons, the eldest (who is 9) described it as "epic" (which I understand is pretty good) and I tweeted, only to be amused by a reply which suggested the same library had caused hilarity in Duncan Smart (@DuncanSmart)'s household.

Small world, eh!
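My program was nothing grand, as I said, but for anyone wondering, the System.Speech part really is only a few lines. Here's a minimal sketch from memory (the class name and message are made up, and it needs a reference to the System.Speech assembly adding to the project):

using System;
using System.Speech.Synthesis; // add a reference to the System.Speech assembly

class Talker
{
    static void Main()
    {
        string message = "Hello. This is epic!";
        Console.WriteLine(message);       // write to the console...
        using (var synth = new SpeechSynthesizer())
        {
            synth.Speak(message);         // ...and speak it through the default audio device
        }
    }
}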

I followed the link to Duncan's code and thought "Hmm… GitHub… I use that for my Arduino code… I wonder if…" – and yes, Visual Studio can publish to GitHub too. It took some work to suss it out though, so here's what I did (following advice on Stack Overflow)…

  1. In Visual Studio, select File then Add to Source Control (which creates a local Git repository)
  2. On GitHub, create a new repository (but don't initialise it with a README – Visual Studio wants an empty repository).
  3. Copy the HTTPS URL for the new repo, then go back to Visual Studio, open Team Explorer, select Home, then Unsynced Commits, and enter the GitHub URL before clicking Publish.
  4. You may find that you have to commit the changes locally first, which in my case required creating a local username and supplying an email address.
  5. After committing, the solution should be visible on GitHub.

For reference, I was using Visual Studio Express 2013 for Windows Desktop.
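Incidentally, for anyone who prefers the command line, the rough equivalent of what Visual Studio is doing behind the scenes is something like this (with a placeholder repository URL):

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/username/repository.git
git push -u origin master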

A quick look at Lab Management in Visual Studio Team System 2010

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks ago I referred to Microsoft’s announcement of Visual Studio 2010 Lab Management, asking if this was Microsoft’s answer to VMware Stage Manager and the answer is… sort of.

I don’t know a huge amount about Stage Manager but the basic premise is that it targets release management by placing virtual machine images into a configuration (a service) and then promoting or demoting configurations between environments based on the rights assigned to a user. Images can also be archived or cloned to create a copy for further testing.

Microsoft’s approach is subtly different – as should be expected with a product that’s part of Visual Studio it’s focused on aiding developers to avoid configuration drift and to perform repetitive system tests during the application development lifecycle, leaving the System Center management products to manage the movement of virtual machines between environments in the virtual infrastructure.

The VSTS approach attempts to address a number of fundamental issues:

  • Reproduction of bugs. It's a common scenario – a tester files a bug but the developer is unable to reproduce it so, after a few rounds of bugfix ping-pong, the incident is closed with a "no repro" status, resulting in poor morale on both sides. Lab Management allows the definition of test cases for manual testing (marking steps as passed/failed) and includes an action log of all steps performed by the tester. When an error occurs, the environment state can be checkpointed (including the memory, registry, operating system and software state), allowing for reproduction of the issue. A system of collectors is used to gather diagnostic data and, with various methods provided for recording tests (as a video, a checkpoint, an action log or an event log), it's possible to automate the bug management and tracking process, including details, system information, test cases and links to logs/checkpoints – all information is provided to the developer within the Visual Studio interface and the developer has access to the tester's environment. In addition, because each environment is made up of a number of virtual machines – rather than running all application tiers on a single box – so-called "double-hop" issues are avoided, whereby the application works on one box but issues appear when it's scaled out. In short: Lab Management improves quality.
  • Environment setup. Setting up test environments is complex but, using Lab Management, it's possible for a developer to use a self-service portal to rapidly create a new environment from a template (not just a VM, but the many interacting roles which make up that environment – a group of virtual machines with an identity – for example, an n-tier web application). These environments may be copied, shared or checkpointed. The Lab Environment Viewer allows the developer to view the various VM consoles in a single window (avoiding multiple Remote Desktop Connection instances) as well as providing access to checkpoints, allowing developers to switch between different versions of an environment and, because multiple environment checkpoints use the same IP address schema, supporting network fencing. In short: Lab Management improves productivity.
  • Building often and releasing early.  Setting up daily builds is complex and Lab Management’s ability to provide clean environments is an important tool in the application development team’s arsenal.  Using VSTS, a developer can define builds including triggers (e.g. date and time, number of check-ins) and processes (input parameters, environment details, scripts, checkpoints, unit tests to run, etc.).  The traditional build cycle of develop/compile, deploy, run tests becomes develop/compile, restore environment, deploy, take checkpoint, run tests – significantly improving flexibility and reducing setup times. In short: Lab Management improves agility.

From an infrastructure perspective, Lab Management is implemented as a new role in Visual Studio Team System (VSTS), which is itself built on Team Foundation Server (TFS). Lab Management sits alongside Test Case Management (also new in Visual Studio 2010 – codenamed Camano), Build Management, Work Item Tracking and Source Control.

Vishal Mehotra, a Senior Lead Program Manager working on VSTS Lab Management in Microsoft's India Development Center, explained to me that, in addition to TFS, System Center Virtual Machine Manager (SCVMM) is required to provide the virtual machine management capabilities (effectively, VSTS Lab Management provides an abstraction layer on the environment using SCVMM). Whilst it's obviously Microsoft's intention that the virtualisation platform will be Hyper-V, because SCVMM 2008 can manage VMware VirtualCenter, it could also be VMware ESX. The use of enterprise virtualisation technologies means that the Lab Management environments are scalable and the VMs may be moved between environments when defining templates (e.g. to take an existing VM and move it from UAT into production). In addition, System Center Operations Manager adds further capabilities to the stack.

Whilst the final product is some way off and the marketing is not finalised, it seems likely that Lab Management will be a separate SKU (including the System Center prerequisites).  If you’re looking to get your hands on it right now though you may be out of luck – unfortunately Lab Management is not part of the current CTP build for Visual Studio 2010 and .NET Framework 4.0.

This post really just scrapes the surface (and, as I’m not a developer, that’s about as far as I can take it).  To find out more, read about Lab Management in VSTS 2010 over at Somasegar’s Weblog or check out the video of the PDC 2008 session on improving code quality with VSTS 2010 Lab Management, presented by Principal Program Manager, Ram Cherala.

PC, phone and web: How Microsoft plans to build the next generation of user experiences

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I'm supposed to be taking a week off work, but the announcements coming out of Microsoft's PDC have the potential to make a huge impact on the way that we experience our IT. So, it's day 2 of PDC and I've spent the afternoon and evening watching the keynote and blogging about new developments in Windows…

Yesterday I wrote about Ray Ozzie’s PDC keynote during which the Windows Azure services platform was announced. Today, he was back on stage – this time with a team of Microsoft executives talking about the client platform, operating system and application innovations that provide the front end user experience in Microsoft’s vision of the future of personal computing. And, throughout the presentation, there was one phrase that kept on coming back:

PC, phone and web.

Over the years, PCs have changed a lot but the fundamental features have been flexibility, resilience and adaptability to changing needs. Now the PC is adapting again for the web-centred era.

Right now, the ‘net and a PC are still two worlds – we’ve barely scratched the surface of how to take the most value of the web and the personal computer combined.

PC, phone, and web.

Ozzie spoke of the most fundamental PC advantage being the fact that the operating system and applications are right next to the hardware – allowing the user experience to take advantage of multiple, high-resolution screens, voice, touch, drag and drop (to combine applications) and storage (for confidentiality, mobility, and speed of access) so that users may richly create, consume, interact with, and edit information. The PC is a personal information management device.

The power of the web is its global reach – using the 'net we can communicate with anyone, anywhere – and the Internet is "every company's front door" – a common meeting place. The unique value of the web is the ability to assemble the world's people, organisations, services and devices – so that we can communicate, transact and share.

Like PCs, phone software is close to the hardware and it has full access to the capabilities of each device – but with the unique advantage that it's always with the user – and it knows where they are (location) and at what time – providing spontaneity for capture and delivery of information.

Microsoft's vision includes applications that span devices in a seamless experience – harnessing the power of all three access methods.

PC, phone and web.

“We need these platforms to work together and yet we also want access to the full power and capabilities of each”

[Ray Ozzie, Chief Software Architect, Microsoft Corporation]

I won’t cover all of the detail of the 2-and-a-half hour presentation here, but the following highlights cover the main points from the keynote.

Steven Sinofsky, Senior Vice President for Microsoft’s Windows and Windows Live Engineering Group spoke about how Windows 7 and Server 2008 R2 share the same kernel but today’s focus is on the client product:

  • Sinofsky brought Julie Larson-Green, Corporate Vice President, Windows Experience on stage to show off the new features in Windows 7. Windows 7 is worth a blog post (or few) of its own, but the highlights were:
    • User interface enhancements, including new taskbar functionality and access to the ribbon interface for developers.
    • Jump lists (menus on right click) from multiple locations in the user interface.
    • Libraries (which allow for searching across multiple computers).
    • Touch capabilities – for all applications through mouse driver translation, but enhanced for touch-aware applications with gestures and a touch-screen keyboard.
    • DirectX – harnessing the power of modern graphics hardware and providing an API for access, not just to games but also to 2D graphics, animation and fine text.
    • And, of course, the fundamentals – security, reliability, compatibility and performance.
  • Windows Update, music metadata and online help are all service-based. Windows 7 makes use of Microsoft's services platform, with Internet Explorer 8 to access the web. Using technologies such as those provided by Windows Live Essentials (an optional download with support for Windows Live or third-party services via standard protocols), Microsoft plans to expand the PC experience to the Internet with software plus services.

PC, phone and web.

“We certainly got a lot of feedback about Windows Vista at RTM!”

[Steven Sinofsky, Senior Vice President, Microsoft Corporation]

  • Sinofsky spoke of the key lessons from the Windows Vista experience, outlining key lessons learned as:
    • Readiness of the ecosystem – vendor support, etc. Vista changed a lot of things, whereas Windows 7 uses the same kernel as Windows Vista and Server 2008, so there are no ecosystem changes.
    • Standards support – e.g. the need for Internet Explorer to fully support web standards and support for OpenXML documents in Windows applets.
    • Compatibility – Vista may be more secure, but UAC has not been without its challenges.
    • Scenarios – end to end experience – working with partners, hardware and software to provide scenarios for technology to add value.
  • Today, Microsoft is releasing a pre-beta milestone build of Windows 7 (milestone 3), which is not yet feature-complete.
  • In early 2009, a feature complete beta will ship (to a broader audience) but it will still not be ready to benchmark. It will incorporate a feedback tool which will package the context of what is happening along with feedback alongside the opt-in customer experience improvement program which provides additional, anonymous, telemetry to Microsoft.
  • There will also be a release candidate before final product release and, officially, Microsoft has no information yet about availability but Sinofsky did say that 3 years from the general availability of Windows Vista will be around about the right time.

Next up was Scott Guthrie, Corporate Vice President for Microsoft’s .NET Developer Division who explained that:

  • Windows 7 will support .NET or Win32 client development with new tools including new APIs, updated foundation class library and Visual Studio 2010.
  • Microsoft .NET Framework (.NET FX) 3.5 SP1 is built in to Windows 7, including many performance enhancements and improved 3D graphics.
  • A new Windows Presentation Framework (WPF) toolkit for the .NET FX 3.5 SP1 was released today for all versions of Windows.
  • .NET FX 4 will be the next version of the framework with WPF improvements and improved fundamentals, including the ability to load multiple common language runtime versions inside the same application.
  • Visual Studio 2010 is built on WPF – more than just graphics but improvements to the development environment too and an early CTP will be released to PDC attendees this week.
    • In a demonstration, Tesco and Conchango showed a WPF client application for tesco.com, aiming to save us money (every little helps) but have us spend more of it with Tesco! This application features a Tesco at home gadget with a to-do list, delivery and special offer information, and access to a "corkboard". The corkboard is the hub of family life, with meal planning, calendar integration, the ability to add ingredients to the basket, recipes (including adjusting quantities) and calorie counts. In addition, the application includes a 3D product wall to find an item among 30,000 products, look at the detail and organise products into lists, and the demonstration culminated with Conchango's Paul Dawson scanning a product barcode to add it to the shopping list.
  • Windows 7 also includes Internet Explorer 8 and ASP.NET improvements for web developers. In addition, Microsoft claims that Silverlight is now on 1 in 4 machines connected to the Internet, allowing for .NET applications to run inside the browser.
  • Microsoft also announced the Silverlight toolkit with additional controls on features from WPF for Silverlight 2 as a free of charge toolkit and Visual Studio 2010 will include a Silverlight designer.

David Treadwell, Corporate Vice President, Live Platform Services spoke about how the Live Services component within Windows Azure creates a bridge to connect applications, across devices:

PC, phone and web.

  • The core services are focused around identity (e.g. Live ID as an OpenID provider), directory (e.g. the Microsoft services connector and federation gateway), communications and presence (e.g. the ability to enhance websites with IM functionality) and search and geospatial capabilities.
  • These services may be easily integrated using standards-based protocols – not just on a Microsoft .NET platform but invoked from any application stack.
  • Microsoft has 460 million Live Services users who account for 11% of total Internet minutes and the supporting infrastructure includes 100,000s of servers worldwide.
  • We still have islands of computing resources and Live Mesh bridges these islands with a core synchronisation concept but Mesh is just the tip of the iceberg and is now a key component of Live Services to allow apps and websites to connect users, devices, applications and to provide data synchronisation.
  • The Live Service Framework provides access to Live Services, including a Live operating environment and programming model.
  • Ori Amiga, Group Program Manager, demonstrated using the Live Framework to extend an application to find data on multiple devices, with contact integration for sharing. Changes to an object and its metadata were synchronised and reflected on another machine without any user action, and a mobile device was used to add data to the mesh, which synchronised with other devices and with shared contacts.
  • Anthony Rhodes, Head of Online Media for BBC iPlayer (which, at its peak, accounts for 10% of the UK's entire Internet bandwidth) spoke of how iPlayer is moving from an Internet catch-up service (broadcast 1.0) to a model where the Internet replaces television (broadcast 2.0), using Live Mesh with a local Silverlight application. Inventing a new word ("meshified"), Rhodes explained how users can share content between one another and across devices (e.g. watch a programme on the way to work, then resume playback from where it left off on the computer).

In the final segment, before Ray Ozzie returned to the stage, Takeshi Numoto, General Manager for the Office Client spoke of how Microsoft Office should be about working the way that users want to:

  • Numoto announced Office web applications for Word, Excel, OneNote and PowerPoint as part of Office 14 and introduced the Office Live Workspace, built on Live Services to allow collaboration on documents.
  • In a demonstration, a document was edited without locks or read only access – each version of the document was synchronised and included presence for collaborators to reach out using e-mail, instant messaging or a phone call. Office web applications work in Internet Explorer, Firefox or Safari and are enhanced with Silverlight. Changes are reflected in each collaborator’s view but data may also be published to websites (e.g. a Windows Live Spaces blog) using REST APIs so that as the data changes, so does the published document, extending office documents onto the web.
  • Office Web apps are just a part of Office 14 and more details will be released as Office 14 is developed.
  • Numoto summarised his segment by highlighting that the future of productivity is diversity in the way that people work – bringing people and data together in a great collaboration experience which spans…

PC, phone and web.

  • In effect, software plus services extends Office into connected productivity. In a direct reference to Google Apps, Microsoft's aspirations are about more than just docs and spreadsheets in a browser accessed over the web: the pieces combine to create an integrated solution which provides more value – with creation on the PC, sharing and collaboration on the web, and information placed within arm's reach on the phone. Seamless connected productivity – an Office across platform boundaries – an office without walls.

PC, phone and web.

Windows vs. Walls
Software plus services is about combining the best of Windows and the best of the web – Windows and Windows Live together in a seamless experience – a Windows without walls. All of this is real – but, as Ray Ozzie explained, it's also nascent – this is really just the beginning of Microsoft's future computing platform and, based on what Microsoft spoke of in yesterday's and today's PDC keynotes, the company is investing heavily in, and innovating on, the Windows platform. Google may have been the one to watch lately but it would be foolish to write off Windows just yet – Microsoft's brave new world is enormous.

Incorrect side-by-side configuration caused by missing runtime libraries

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Just before the weekend, I was trying to run an application on a 64-bit installation of Windows Server 2008 and was presented with a strange error:

This application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more details.

I know that side-by-side is something to do with avoiding DLL hell (by not dumping all the DLLs in the same folder with the consequences of one application overwriting another’s libraries) but I didn’t have a clue how to fix it and the application event log didn’t help much:

Log Name: Application
Source: SideBySide
Date: 15/08/2008 18:00:10
Event ID: 33
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: computername.domainname.tld
Description:
Activation context generation failed for "C:\foldername\applicationname.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis.

Thankfully, Junfeng Zhang wrote a comprehensive blog post about diagnosing side-by-side failures. It's a bit too developery for me, but I did at least manage to follow the instructions to produce a sxstrace.
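For anyone else trying this, the trace is captured and decoded with the sxstrace utility that ships with Windows – something along these lines, run from an elevated command prompt (reproduce the application error whilst the trace is recording, then press Enter to stop it):

sxstrace trace -logfile:sxstrace.etl
sxstrace parse -logfile:sxstrace.etl -outfile:sxstrace.txt

The parsed output for my application follows: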

=================
Begin Activation Context Generation.
Input Parameter:
        Flags = 0
        ProcessorArchitecture = AMD64
        CultureFallBacks = en-US;en
        ManifestPath = C:\foldername\applicationname.exe
        AssemblyDirectory = C:\foldername\
        Application Config File =
-----------------
INFO: Parsing Manifest File C:\foldername\applicationname.exe.
        INFO: Manifest Definition Identity is (null).
        INFO: Reference: Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8"
INFO: Resolving reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
        INFO: Resolving reference for ProcessorArchitecture x86.
                INFO: Resolving reference for culture Neutral.
                        INFO: Applying Binding Policy.
                                INFO: No publisher policy found.
                                INFO: No binding policy redirect found.
                        INFO: Begin assembly probing.
                                INFO: Did not find the assembly in WinSxS.
                                INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft.VC90.CRT\9.0.21022.8__1fc8b3b9a1e18e3b\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Did not find manifest for culture Neutral.
                        INFO: End assembly probing.
        ERROR: Cannot resolve reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
ERROR: Activation Context generation failed.
End Activation Context Generation.

=================
Begin Activation Context Generation.
Input Parameter:
        Flags = 0
        ProcessorArchitecture = Wow32
        CultureFallBacks = en-US;en
        ManifestPath = C:\foldername\applicationname.exe
        AssemblyDirectory = C:\foldername\
        Application Config File =
-----------------
INFO: Parsing Manifest File C:\foldername\applicationname.exe.
        INFO: Manifest Definition Identity is (null).
        INFO: Reference: Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8"
INFO: Resolving reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
        INFO: Resolving reference for ProcessorArchitecture WOW64.
                INFO: Resolving reference for culture Neutral.
                        INFO: Applying Binding Policy.
                                INFO: No publisher policy found.
                                INFO: No binding policy redirect found.
                        INFO: Begin assembly probing.
                                INFO: Did not find the assembly in WinSxS.
                                INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft.VC90.CRT\9.0.21022.8__1fc8b3b9a1e18e3b\Microsoft.VC90.CRT.DLL.
                                INFO: Did not find manifest for culture Neutral.
                        INFO: End assembly probing.
        INFO: Resolving reference for ProcessorArchitecture x86.
                INFO: Resolving reference for culture Neutral.
                        INFO: Applying Binding Policy.
                                INFO: No publisher policy found.
                                INFO: No binding policy redirect found.
                        INFO: Begin assembly probing.
                                INFO: Did not find the assembly in WinSxS.
                                INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft.VC90.CRT\9.0.21022.8__1fc8b3b9a1e18e3b\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.DLL.
                                INFO: Attempt to probe manifest at C:\foldername\Microsoft.VC90.CRT\Microsoft.VC90.CRT.MANIFEST.
                                INFO: Did not find manifest for culture Neutral.
                        INFO: End assembly probing.
        ERROR: Cannot resolve reference Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8".
ERROR: Activation Context generation failed.
End Activation Context Generation.

I don’t understand most of that trace but I can see that it’s trying to find a bunch of resources named Microsoft.VC90.CRT.* and a search of my system suggests they are missing. Microsoft VC sounds like Visual C++ and v9 would be Visual Studio 2008. Checking back at the original developer’s website, I saw that he suggested to someone else experiencing problems that they might need the Microsoft Visual C++ 2008 redistributable package. I thought that the whole point of having the Microsoft .NET Framework on my PC was so that .NET applications would run, regardless of the language they were developed in (if there are any developers reading this, please feel free to leave a comment on this because I’m out of my depth at this point) but I downloaded the latest x64 version and installed it on my system.

No change (same error).

I realised that I was using the latest (SP1) version (v9.0.30729.17) and perhaps I needed the original one (v9.0.21022), as that's the version number in the sxstrace log. So I removed the SP1 version and installed the original redistributable package instead.

Still no change.

I had the C++ source code, so I considered recompiling the application, but I found that there was no compiler on my system (unlike for C#) and installing one of the Visual Studio Express Editions would take a while. So I thought about other options.

It turned out that, even though I was running on 64-bit Windows, I needed to install a 32-bit redistributable. Don’t ask me why (that’s another developer question – the references to GAC_32 and Win32 in the sxstrace probably provide a clue) but it worked – and it didn’t matter whether I used the original or the SP1 version of the Microsoft Visual C++ 2008 redistributable package (so I used SP1).
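(For what it's worth, I think the clue is in the application's embedded manifest: the dependency is declared with processorArchitecture="x86", so the loader wants the 32-bit CRT regardless of the operating system's architecture. Reconstructed from the reference string in the trace above, the relevant manifest fragment would look something like this:)

<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32" name="Microsoft.VC90.CRT"
                      version="9.0.21022.8" processorArchitecture="x86"
                      publicKeyToken="1fc8b3b9a1e18e3b" />
  </dependentAssembly>
</dependency>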

Now the application runs as expected. It’s got me thinking though… I really should learn something about .NET development!

No more heroes {please}

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

That's it. A single reference to [IT] heroes. No more – because I didn't count how many times that word was used at the 2008 Global Launch event today, but I certainly didn't have enough fingers and toes to keep a tally – and now I'm tired of hearing it.

Although those of us at the UK launch had already heard from a variety of Microsoft executives (including Microsoft UK Managing Director, Gordon Frazer, and Microsoft's General Manager for the Server and Tools Division, Larry Orecklin) and customers, the highlight was the satellite link-up to the US launch event with Microsoft CEO, Steve Ballmer. Unfortunately, before we got to hear the big man speak, we had to listen to the warm-up act – Tom Brokaw, who it would seem is a well-known television presenter in the States, but totally unknown over here. He waffled on for a few minutes with the basic premise being that we are in a transformational age in the history of our world and that the definition of our time and generation comes from unsung heroes (damn, that's the second time I've used the word) – not celebrities.

So. Windows Server 2008, Visual Studio 2008, SQL Server 2008. Three new products – one released last year, one earlier this month, and another due later in 2008 – in Microsoft's largest ever launch event, with 275,000 people expected to attend events across the globe and another million online at the virtual launch experience website. Ballmer described them as "the most significant [products] in Microsoft's history" and "enablers to facilitate the maximum impact that our industry can have". But what does that mean for you and me – the people that Microsoft likes to refer to with the H word, who implement their technology in order to execute this change on an unsuspecting world?

I’ve written plenty here before about Windows Server 2008, but the 2008 global launch wave is about more than just Windows.  For years now, Microsoft has been telling us about dynamic IT and over the last few years we have seen many products that can help to deliver that vision.  The 2008 global launch wave is built around four areas:

  1. A secure and trusted foundation.
  2. Virtualisation.
  3. Web and developer productivity.
  4. Business intelligence (and user experience).

So, taking each of these one at a time, what do the 2008 products offer?

A secure and trusted foundation

Security and reliability are always touted as benefits for the latest version of any product, but in the case of Windows Server there are some real benefits.  The Server Core installation option results in a smaller codebase, meaning a reduced attack surface.  The modular design of IIS (and indeed the role-based architecture for Windows Server) means that only those components that are required are installed. Read-only domain controllers allow for secure deployment of directory servers in branch office situations that previously would have been a major security risk.

Availability is increased with enhancements to failover clustering (including new cluster validation tools), SQL data mirroring and the new resource governor functionality in SQL Server 2008 which allows resources to be allocated to specific workloads.

On the compliance and governance front, there is network access protection, federated rights management, and transparent SQL data encryption.

Microsoft is also keen to point out that their database platform has seen significantly fewer critical vulnerabilities in recent history than Oracle.

Finally, although not strictly security-related, Microsoft cites power as accounting for 40% of data centre costs, and claims that Windows Server 2008 consumes 10% less power than previous versions of Windows Server when running the same workload.

Virtualisation

Microsoft’s view on virtualisation is broader than just server virtualisation, encompassing not just the new Hyper-V role that will ship within 180 days of Windows Server 2008 release but also profile virtualisation (document redirection and offline files), client virtualisation (Vista Enterprise Centralised Desktop), application virtualisation (formerly SoftGrid) and presentation virtualisation (Terminal Services RemoteApp), all managed in one integrated, unified manner with System Center.

As for VMware‘s dominance of the server virtualisation space – I asked Larry Orecklin how Microsoft would combat customer perceptions around Microsoft’s lack of maturity in this space. His response was that "the proof is in the pudding" and that many customers are running Hyper-V in beta with positive feedback on performance, scalability and ease of use.  Microsoft UK Server Director, Bruce Lynn added that Hyper-V is actually the tenth virtualisation product that Microsoft has brought to market.

In Steve Ballmer’s keynote, he commented that [customers] have told Microsoft that virtualisation is too hard and too expensive – so Microsoft wants to "democratise virtualisation" – to switch from the current situation where less than 10% of servers are virtualised to a world where 90% are.  Their vision is for a scalable and performant hypervisor-based virtualisation platform, with minimal footprint, interoperability with competitive platforms, and integrated management tools.

Web and developer productivity

At the core of Windows Server 2008 is IIS 7.0 but Visual Studio extends the vision for developer productivity when creating rich web applications including support for AJAX, JavaScript IntelliSense, XAML, LINQ, entity-level data access and multi-targeting.

From a platform perspective, there are improvements around shared configuration, administrative delegation and scalability.

Combined with Silverlight for a rich user experience and Expression Blend (for designers to interact with developers on the same code), Microsoft believes that their platform enables customers to provide better performance, improved usability and a better experience for web-based applications. It all looks good to me, but I'm yet to be convinced by Silverlight – or, for that matter, Adobe AIR – this all seems to me like a return to the days when every site had a Shockwave/Flash intro page and I'd like to see a greater emphasis on web standards. Still, at least IIS now supports running PHP without impacting on performance – and Visual Studio includes improved CSS styling support.

Business intelligence

Ballmer highlighted that business intelligence (BI) is about letting users engage with applications – providing not just presentation but insight – getting at the data to provide business value.  Excel is still the most popular business intelligence tool, but combined with other products (e.g. SharePoint and PerformancePoint), the Microsoft BI story is strengthened.

SQL Server 2008 is at the core of the BI platform, providing highly performant and scalable support for data warehousing, with intelligence for both structured and unstructured data. SQL Server Reporting Services integrates with Office applications, and the ability to store spatial data opens new possibilities for data-driven applications (e.g. the combination of non-relational data and BI data to provide location awareness).

Putting it all together

So, that’s the marketing message – but what does this mean in practice?  Microsoft used a fictitious coffee company to illustrate what could be done with their technology but I was interested to hear what some of their TAP customers had been up to.  Here in the UK there were a number of presentations from well-known organisations that have used 2008 launch wave products to solve specific business issues.

easyJet have carried out a proof of concept that they hope to develop into an improved travel portal for their customers.  As a low-fares airline, you might expect anything more than the most basic website to be an expensive extravagance but far from it – 98% of easyJet’s customers book via the web, and if the conversion rate could be increased by 1% then that translates into £17m of revenue each year.

The easyJet proof of concept uses a Silverlight and AJAX front end to access Microsoft .NET 3.5 web services and SQL Server 2008.  Taking a starting point of, for example, London Luton, a user can select a date and see the lowest prices to all available destinations on a map.  Clicking through to a destination reveals a Microsoft Virtual Earth map with points of interest within a particular radius.  Streaming video is added to the mix, along with the ability to view hotel details using TripAdvisor and book online.

The proof of concept went from design to completion in just 6 weeks.  Windows Server 2008 provided IIS 7.0 with its modular design and simplified configuration.  SQL Server 2008 allowed the use of geospatial data.  And Visual Studio 2008 enhanced developer productivity, team collaboration and the overall user experience.

Next up was McLaren Electronic Systems, using SQL Server 2008 to store telemetry data transmitted in real time from Formula 1 racing cars.  With microwave signals bouncing off objects and data arriving out of sequence, the filestream feature allows data to be streamed into a relational database for fast access.  Tests have shown that for files above 2MB this technology will out-perform a traditional file system.  Formula 1 may sound a little specialised to relate to everyday business but as McLaren explained, a Formula 1 team will typically generate 3TB of data in a season.  That’s a similar volume to a financial services company, or a warehousing and logistics operation – so the technology is equally applicable to many market sectors.

The John Lewis Partnership is using Windows Server 2008 for its branch office infrastructure.  Having rolled out Windows Server 2003, they would like to reduce the number of servers (and the carbon footprint of their IT operations) at the same time as doubling the number of stores.  Security is another major consideration, with the possibility of data corruption if power is removed from a server and a security breach if a directory server is compromised.

By switching branch servers to Windows Server 2008 read-only domain controllers (RODCs), John Lewis can combine the DCs with other branch office functions (print, DHCP, System Center Configuration Manager and Operations Manager) to remove one server from every store. The reduction in replication traffic (AD replication is all one-way, from the centre to the RODCs) allows for a reduction in data centre DCs too. Windows Server 2008 also facilitates improved failover between data centres in a disaster recovery scenario. Other Windows Server technologies of interest to John Lewis include Server Core, 64-bit scalability and clustering.

The University of Cambridge is making use of the ability to store spatial data in SQL Server 2008 to apply modern computing to the investigation of 200 year-old theories on evolution.  And Visual Studio 2008 allowed the construction of the associated application in just 5 days.  As Professor John Parker and his self-confessed "database geek" sidekick, Dr Mark Whitehorn explained, technologies such as this are "allowing the scientific community to wake up to business intelligence".

Finally, the Rural Payments Agency (the UK government agency responsible for paying agricultural subsidies) is using Microsoft Application Virtualization and Terminal Services to provide an ultra-thin client desktop to resolve application conflicts and allow users to work from any desk.

Roadmap

Microsoft never tells us a great deal about the roadmap (at least not past the next year or so) but the 2008 launch wave includes a few more products yet.  Visual Studio 2008 and Windows Server 2008 have already shipped.  SQL Server 2008 will be available in the third quarter of 2008 (with a community technology preview today) and the Hyper-V role for Windows Server will ship within 180 days of Windows Server (although I have heard rumours it may be a lot closer than that).  In the summer we will see a new release of Windows Small Business Server as well as a new product for SMEs – Windows Essential Business Server – and, at the other end of the computing spectrum, Windows High Performance Computing Server.  Finally, a new version of Silverlight will ship at some point this year.

Summary

I may not be a fan of the HEROES happen {here} theme but that’s just marketing – I’ve made no secret of the fact that I think Windows Server 2008 is a great product.  I don’t have the same depth of experience to comment on Visual Studio or SQL Server but the customer presentations that I heard today add credence to Microsoft’s own scenario for a dynamic, agile, IT infrastructure to reduce the demands for maintenance of the infrastructure and drive out innovation to support the demands of modern business. 

Mark Wilson {United Kingdom}

Compiling C# code without access to Visual Studio

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I'm not a developer and, as such, I don't have a copy of Visual Studio, but this evening I needed to compile somebody else's C# code to produce a dynamic link library (DLL) and call it from a Windows PowerShell script. Somewhere back in my distant past, I recall using Turbo Pascal, Borland C++, early versions of Visual Basic and even Modula-2 to make/link/compile executables, but I've never used a modern compiled language (even on Linux, I avoid rolling my own code and opt for RPM-based installations). So I downloaded and installed Visual C# 2005 Express Edition (plus service pack 1, plus the hotfix to make it run on Windows Vista).

Sadly, that didn't get me anywhere – I was totally confused by the Visual Studio IDE and, in any case, the instructions I had told me to access the Visual Studio command prompt and run csc /t:library filename.cs.

It turns out that the Visual Studio Express Editions don’t include the Visual Studio command prompt but in any case, the C# compiler (csc.exe) is not part of Visual Studio but comes with the Microsoft.NET framework (on my system it is available at %systemroot%\Microsoft.NET\Framework\v2.0.50727\). Once I discovered the whereabouts of the compiler, compiling the code was a straightforward operation.
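To illustrate, given a hypothetical Greeter.cs along these lines:

// Greeter.cs - a made-up example class library
namespace Demo
{
    public class Greeter
    {
        public static string Greet(string name)
        {
            return "Hello, " + name + "!";
        }
    }
}

…the whole build is a one-liner, producing Greeter.dll in the same folder:

%systemroot%\Microsoft.NET\Framework\v2.0.50727\csc.exe /t:library Greeter.cs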

As for what I did with the DLL and PowerShell, I’ll save that for another post.

Visual Studio Express Editions and Coding4Fun

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Even though I'm no developer, I have been known to knock up the occasional script and, once upon a time, I could even have considered myself to have programming skills…

…Well, maybe now I can start to build them up again, as Microsoft are offering the Visual Studio 2005 Express Editions for free. More details are available on the Microsoft website (check out the FAQ) but basically: for web developers there is Visual Web Developer 2005 Express Edition; for database developers there's SQL Server 2005 Express Edition; and for Windows developers there are Visual Studio Express Editions for Visual Basic, Visual C#, Visual C++ and Visual J#. Microsoft is pitching these as "lightweight, easy-to-use, and easy-to-learn tools for the hobbyist, novice, and student developer" but there is nothing stopping them from being used in a corporate environment (aside from the reduced feature sets).

For anyone (like me) who is new to coding, or returning after a break of several years, MSDN has a coding4fun microsite.

Microsoft developer road trip CD

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In a few days' time, Microsoft are launching a whole host of products, including Visual Studio 2005 and SQL Server 2005. To complement the launch events and associated webcasts, Microsoft have produced the Microsoft developer road trip CD – a 45-minute audio download featuring information on Visual Studio 2005, SQL Server 2005 and the .NET Framework 2.0 from Microsoft's own developer and platform group experts.

Designed to be listened to on the move, either in the car or on your favourite media player, this should allow you to get up to speed on what these new product releases are all about in just a few short trips.