New job at Fujitsu Services – no longer blogging at Conchango

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Over the last few years, I’ve been a consultant for a major IT Services company; worked for a UK-based support services company (and hated most of my time there); contracted for Australia’s largest independent magazine distributor; worked in-house designing and project managing a Europe-wide infrastructure refresh for a major fashion design, marketing and retail organisation; and then I joined Conchango, a mid-sized consultancy which specialises in delivering technology-driven business solutions that incorporate the latest methodologies and technologies.

I’ve worked with Conchango, first as a client and then as a consultant, for about three and a half years in total, but the time has come for me to move on. For anybody who lives within commuting distance of London or Surrey, who enjoys the variety of work which consultancy offers, and who knows a significant amount about enterprise intelligence, interactive media, agile development and program management, or mobility, Conchango is a fantastic place to work. It feels a bit strange to be leaving a company that I still enjoy and which is packed with talented people, but as Conchango’s focus shifts away from infrastructure services, I’ve decided to rejoin Fujitsu Services (it was ICL when I was there just over 5 years ago) to embrace a new role as a Senior Customer Solutions Architect, taking technical responsibility for IT infrastructure projects within their Architecture and Design Group.

One of the things I’ve enjoyed most at Conchango (apart from being lucky enough to feature in the IT press) is that they encourage blogging (there’s a whole load of Conchango bloggers now) although my blog output has prompted some to comment on its volume and to say they almost expect to see what I had for breakfast appear next! One of my clients says he can find out what he’s been up to by reading these pages! I just hope that what I write is useful and that people enjoy reading it. Since last November, most of my posts here have been mirrored on my Conchango blog – from today, that will no longer be the case, and as far as I know, Fujitsu doesn’t have company-sponsored blogs, so this site is once again the single focus of my technology-related blogging (although I still hope to have the occasional article published on the Microsoft TechNet Industry Insiders blog).

I’ve got loads of stuff waiting for me to write about (but not much time to write it) – in the meantime, watch this space…

Bad timing…

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

For the last couple of days, the Microsoft File Transfer Manager has been running on one of my PCs, downloading 5.33GB of Windows code name “Longhorn” and IE7 beta software from Microsoft Connect (averaging out at about 65kbps). Sometime last night, it all finally completed, but then, a few minutes ago, DHL delivered a package from Microsoft in Redmond containing… you guessed it… bootable DVDs of Windows Vista Professional Beta 1 and Windows Code Name “Longhorn” Beta 1. Arghhhhh!!!

“Incessant infrastructure and tech gossip”

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

For a while now I’ve wanted a catchy subtitle for my blog (something descriptive, maybe with a touch of humour, and perhaps also a little bit thought-provoking – like, for example, a “grey matter honeypot, distracting the mind with information overload”) but I’m just not witty enough to come up with one myself.

Well, now it looks like fellow Conchango blogger, Jamie Thomson, has come up with the goods for me in his latest post, where he describes my musings as “incessant infrastructure and tech gossip”. Well, it’s certainly descriptive!

More on integrating device drivers into an unattended Windows build

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier in the year I blogged about discovering unknown devices in Windows for integration into an unattended build. What I didn’t detail at the time was how to work out which device driver files are needed for a particular device.

Some device driver packages are pretty simple, but others are several megabytes in size. Rarely is the whole driver package required and it is usually sufficient to copy just a few files to the Remote Installation Services (RIS) installation source – generally:

  • An .INF (setup information) file with an associated .CAT (security catalog) file.
  • One or more .SYS (system) files.
  • Possibly some .EXE (application) and .DLL (application extension) files.

I usually start off by reading the setup information file which relates to the Windows XP version of a driver. It’s straightforward enough to identify the catalog file (used to confirm the digital signature for the other files) from the CatalogFile= line in the [Version] section and, for many simple .INFs, it is easy to identify the device driver files from the [SourceDisksFiles] section, but sometimes the setup information file supports a variety of devices and not all of the files are required.

For complex driver configurations (e.g. an ATI video driver), I usually copy the .INF and .CAT files to the installation source and then attempt to install the driver from a reference workstation. As Windows XP throws an error each time it is unable to locate a file, I add the requisite file to the installation source, retry and repeat until all the necessary files are present (which normally only takes a few minutes).
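
To illustrate, here’s a minimal, made-up example of the kind of setup information file I mean (the device and file names are invented, but the section and directive names are the standard ones):

    [Version]
    Signature   = "$Windows NT$"
    Class       = Net
    CatalogFile = examplenic.cat

    [SourceDisksNames]
    1 = %DiskName%,,,

    [SourceDisksFiles]
    examplenic.sys = 1
    examplenic.dll = 1

    [Strings]
    DiskName = "Example NIC driver disk"

In a case like this, the files to copy alongside the .INF would be examplenic.cat, examplenic.sys and examplenic.dll.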

Some device drivers include a subfolder within the [SourceDisksNames] section. In this case you have a choice – either edit the .INF (not recommended as it will break the digital signature), or place the appropriate files (generally all except the .INF and the .CAT) into an appropriately named subfolder and extend the OemPnPDriversPath entry in the [Unattended] section of the unattended setup file, as sketched below.
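
As a sketch, the relevant part of an unattended setup file might look something like this (the folder names are just examples – each path is relative to the root of the system drive, which is where the $OEM$\$1 distribution folders are copied during setup):

    [Unattended]
    OemPreinstall     = Yes
    OemPnPDriversPath = Drivers\Nic;Drivers\Video;Drivers\Modem\IBM;Drivers\Modem\HP

Windows then adds each of the listed folders to the device driver search path during setup.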

One final note. In my unattended build, I have support for a variety of PC models, some of which use different drivers for what would seem to be identical hardware. For example, both the IBM ThinkPad T40 and the Compaq Evo N620c have an Agere Systems AC’97 Modem, but the driver version I downloaded from IBM and integrated into the build for the T40 was not recognised by the N620c (due to different PCI device instance IDs – the T40 implementation is PCI\VEN_8086&DEV_24C6&SUBSYS_05241014&REV_01\3&61AAA01&0&FE whilst the N620c is PCI\VEN_8086&DEV_24C6&SUBSYS_00580E11&REV_03\3&61AAA01&0&FE).

I didn’t have access to a spare T40 in order to regression test a version of the driver with support for both device instances (assuming there is one), so I downloaded a version of the driver from HP, which does work with the N620c. Although the .INF and .CAT files are different, both the IBM and HP drivers have a number of files in common (AGRSM.SYS, AGRSMMSG.EXE and AGRSMDEL.EXE – albeit with different date and time stamps), so I left the newest versions of the common files in place (which happen to be the IBM versions). As I haven’t changed the files for the IBM driver, the T40 build should be fine, but the N620c build fails a check on the digital signature due to a mismatch between the file versions.

There are two ways around this: either place the two driver versions into different folders and extend the OemPnPDriversPath in the [Unattended] section of the unattended setup file; or disable the check for signed drivers as detailed in Microsoft knowledge base article 314479.
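
Incidentally, if you need to check hardware and device instance IDs on a reference machine without digging through Device Manager, Microsoft’s devcon.exe command-line utility will list them – for example (the ID pattern here is just the vendor/device portion of the modem IDs above):

    devcon hwids *
    devcon hwids "PCI\VEN_8086&DEV_24C6*"

The first command lists the hardware and compatible IDs for every device; the second restricts the output to devices matching the pattern.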

Metro: read all about it!

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A while back, I read that Microsoft is switching to XML-based document formats in the next release of Microsoft Office and I just read some more…

According to PDFzone, Microsoft is developing a new document format (codenamed Metro), which is:

  • A new document file format, similar in many ways to portable document format (PDF).
  • A spool format, suitable for spooling to a device through the print subsystem.
  • A page description language, similar to printer control language (PCL) or PostScript, that can be used to transmit the information all the way down to a printer.

Apparently it’s all part of the WinFX API, being developed as part of Windows Vista but also due to be released for Windows XP and Server 2003. According to Paul Thurrott’s Windows Vista FAQ:

    “Based on XML, Metro is to Windows Vista as Adobe PDF is to Mac OS X: It’s a device- and application-independent printing architecture that allows documents to retain their exact formatting in any application, and when printed. Unlike PDF, however, Metro is based on XML and will be released as an open standard. Metro will also incorporate ZIP technology – similar to that used by the next major version of Microsoft Office – to compress and decompress files on the fly. From a technology standpoint, Metro includes an XML-based electronic paper format called Metro Reach, a document viewer for viewing, managing, and printing Metro files, the ability to digitally sign Metro documents, APIs that allow programmers to integrate their applications and services with Metro, a print pipeline, and a new driver model for Metro-compatible printers.”

Finally, an open document standard that doesn’t require an expensive application license to produce a document (I’m guessing as it’s XML-based, I could write Metro documents using Windows Notepad, edit.com – or if I was feeling particularly masochistic, I could use edlin.exe or the UNIX vi editor!). It will be interesting to see how this new format compares with DocBook.

Brian Jones’ blog has more information and links about the Microsoft Office Open XML formats in Office 12.

Exchange Server RFC and standards compliance

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Members of the “oppose Microsoft group” often deride the software giant, accusing them of implementing proprietary technologies to abuse their monopoly; but in recent years there has been a real shift towards standards-based technology implementations in Microsoft software. Take Microsoft Exchange Server, for example: the messaging and collaboration platform implements over 50 RFCs and standards, as detailed in Microsoft knowledge base article 262986.

MSI package for Mozilla Firefox 1.0

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Back in February, I posted a blog entry about installing applications silently (or at least quietly), e.g. as part of an unattended build process. Thomas Lee added a comment about WIX (Windows Installer XML), which I had not mentioned because at the time I was hoping to find some time to review WIX myself; although Thomas’ blog probably has some more information on the subject.

One of my “problem applications” when it comes to automated builds is Mozilla Firefox, which for some reason doesn’t seem to support a silent installation (or didn’t last time I looked). Well, today I found the YVG Software Services Mozilla Firefox 1.0 installer – so now you can get a copy of Firefox packaged in Windows Installer (.MSI) format.
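
For anyone who hasn’t deployed Windows Installer packages before, the point of having an .MSI is that the standard msiexec switches then give a silent (or at least quiet) installation – for example (the filename here is illustrative, so check what the downloaded package is actually called):

    msiexec /i FirefoxSetup.msi /qn
    msiexec /i FirefoxSetup.msi /qb-

The /qn switch suppresses the user interface completely, whilst /qb- shows basic progress with no modal dialog boxes.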

Making use of various iTunes for Windows plug-ins

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Since I became an iPod convert a few months back, I’ve ripped all of my CD albums to 192kbps MP3s using iTunes (over 5000 songs using 29GB of disk space at the time of writing – I’ve still got about 500 CD singles, plus MiniDisc and vinyl, to go…) but one of the features which really lets down iTunes is the lack of high-quality visualizations (Windows Media Player has loads).

To help me with my quest to find decent iTunes for Windows visualizations, one of my MacMates, Stuart, sent me a link to the iLounge directory of known iTunes plug-ins. I’m still underwhelmed by the available visualizations, but I came across some other interesting plug-ins, like WMPtunelog, which writes information about the currently playing track to the registry (HKEY_CURRENT_USER\Software\Microsoft\MediaPlayer\CurrentMetaData\). I’d really like to access this from my homepage and include a real-time “what I’m listening to” panel, but I’m not sure how at the moment. The blogging plug-in within the Windows Media Player 9 Series creativity fun pack (which also works with Windows Media Player 10) was looking hopeful, until I found that the “code samples for adding support to my web, Visual Basic, or C/C++ application” link in the creativity_pack_readme.htm file was dead… If anybody has any hints, then please let me know!
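
In the meantime, reading the data back out of the registry is straightforward enough. As a sketch (assuming WMPtunelog has populated the key – the exact value names depend on the plug-in, so this just enumerates whatever is there), something like this Python would do it:

    import winreg

    KEY_PATH = r"Software\Microsoft\MediaPlayer\CurrentMetaData"

    # Open the key that WMPtunelog writes to and list every value beneath it
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
                print(f"{name} = {value}")
                index += 1
            except OSError:  # raised when there are no more values
                break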

One alternative may be Brandon Fuller’s Now Playing, which monitors the currently playing song and writes the information out to an XML file (optionally FTPing this to a location of your choice). Brandon uses PHP to process this on his site, but I’m having problems using PHP on my ISP’s servers (my ISP only allows active content to run on a separate server and I can’t seem to call the PHP from within normal HTML pages on their Apache web server), so I’m hoping that I can use an XSL transformation to format the XML instead – but I’m still not sure how to include that in the HTML…
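
One possibility is to skip the HTML page altogether and let the browser apply the transformation directly to the XML file. As a sketch (the element names below are invented – Now Playing’s actual XML schema will differ), linking a stylesheet via an xml-stylesheet processing instruction looks like this:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="nowplaying.xsl"?>
    <nowplaying>
      <song>
        <artist>Example Artist</artist>
        <title>Example Title</title>
      </song>
    </nowplaying>

…and nowplaying.xsl would then generate the HTML:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <!-- Render a one-line "now playing" panel from the XML -->
      <xsl:template match="/">
        <p>Now playing:
          <xsl:value-of select="nowplaying/song/artist"/> -
          <xsl:value-of select="nowplaying/song/title"/>
        </p>
      </xsl:template>
    </xsl:stylesheet>

That still doesn’t embed the result in an existing HTML page, but it does at least produce a browser-viewable page without any server-side code.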

All of this is a bit developery for an infrastructure bod like me but I’ll keep on plugging away with this and will post a comment to this post when (if) I get it all working. In the meantime, answers on a postcard please…

McAfee VirusScan Enterprise/ePolicy Orchestrator tips and tricks

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Over the last couple of months, I’ve been helping one of my clients to gain some control over their anti-virus infrastructure using McAfee VirusScan Enterprise and ePolicy Orchestrator (ePO).

I’m more used to Symantec AntiVirus Corporate Edition with its Symantec System Center Console, but ePO was easy to install (the installation wizard will install MDAC 2.7 if required, as well as MSDE if there is no SQL Server available). Although it seems a bit complex to start with, once you get your head around how the ePO directory works (and how it can integrate with Active Directory), as well as the terminology (distributed repositories, rogue system detection sensors, notification rules, etc.), it actually seems like quite a good product – although the HTTP-based administration console can be a bit flaky at times and ePO maintains its own set of security principals. The reporting tools seem pretty good too.

For anyone trying to get to grips with ePO, there is a whole heap of high-quality product documentation, but as a starting point, I recommend a look at the ePO quick reference card. Unfortunately I can’t link all of the documentation here, as you need to have purchased the product to access that part of the McAfee/Network Associates website, but it is available for download if you have a valid grant number (having said that, some quick googling has turned up a copy of the English version of the quick reference card on the Danish McAfee site).

One thing that I found particularly confusing was the change in where McAfee VirusScan Enterprise writes its log files once the ePO agent is enabled. Ordinarily, VirusScan Enterprise writes log files to %allusersprofile%\Application Data\Network Associates\VirusScan\ with the main files of interest being onaccessscan.txt (used by the VirusScan On-Access Scan), ondemandscan.txt (used by the VirusScan On-Demand Scan) and updatelog.txt (used for updates via the VirusScan console). Depending on the configuration and the version in use, there may also be other log files in existence (e.g. accessprotectionlog.txt, bufferoverflowprotectionlog.txt and emailondeliverylog.txt).

This all changes once the ePO agent is activated, as ePO stores its logs under %allusersprofile%\Application Data\Network Associates\Common Framework\. This folder actually contains a number of useful XML files, as well as mcscript.txt (which details script engine actions, such as processing updates) and updatehistory.ini (which includes details of configuration items, such as the site last used for updates); but even more useful is a file in the \Db subfolder named agent_%computername%.xml. Formatted using frameworklog.xsl, this is the McAfee Agent Activity log, which shows policy enforcement actions along with links to four more files in the same directory – the current and previous framework service logs (agent_%computername%.log and agent_%computername%_backup.log) and the current and previous Network Associates product manager logs (prdmgr_%computername%.log and prdmgr_%computername%_backup.log).
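
As a quick way in when troubleshooting, the agent activity log can be opened straight from a command prompt (this assumes the default installation paths described above):

    start "" "%ALLUSERSPROFILE%\Application Data\Network Associates\Common Framework\Db\agent_%COMPUTERNAME%.xml"

Because the file references frameworklog.xsl in the same folder, opening it in Internet Explorer should render the formatted log rather than the raw XML.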

Together, these logs are really useful for troubleshooting – like the time when a really out-of-date client wouldn’t update because the latest anti-virus signature (.DAT) file didn’t work with the version of the engine that was installed. One of my colleagues found a superDAT to solve that problem, but it was these logs which confirmed where the issue was.

Whilst on the subject of ePO, a few months back I blogged about adding policy pages to ePO.

So that’s it, a few tips and tricks for anybody implementing a McAfee-based anti-virus management solution.

SQL Server 2005 overview

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

As part of my recent quest to learn about SQL Server from an infrastructure perspective, I’ve been attending a number of events, one of which was Michael Platt’s keynote at the May 2005 Microsoft Technical Roadshow – SQL Server 2005 for IT Pros.

In my opinion, SQL Server 2005 (codenamed Yukon) is probably Microsoft’s most significant product release since Windows 2000. Central to the data management functionality within the Microsoft platform, this new version of SQL Server, due for release in November, is a massive rewrite (around 3 million lines of C# code), including a huge number of new features with improvements in three key areas, all underpinned by new levels of security and reliability:

  • Developer productivity:
    • Visual Studio 2005 integration.
  • Enterprise data management:
    • SQL Server database.
    • SQL Server replication services.
  • Business intelligence (BI):
    • SQL Server integration services (SSIS) – replacing data transformation services (DTS).
    • SQL Server analysis services.
    • SQL Server reporting services.
    • SQL Server notification services.

In terms of developer productivity, the Microsoft .NET Framework common language runtime (CLR) is now part of the SQL engine, allowing database-side coding using any Microsoft .NET language. XML support is no longer a bolt-on component, with XML now supported natively, feeding into the web services strategy, with integrated web services features including the new SQL Service Broker for messaging between SQL Server and other systems.

Looking at enterprise data management: availability and security is enhanced with support for database mirroring and online operations as well as data security and privacy (encryption and enhanced auditing); manageability is improved with a self-tuning database, fast recovery and restore, and a host of management tools; and the platform scales to cover a huge range of devices, from smartphones to massive 64-bit installations, using technologies such as partitioning and snapshots to allow the system to scale in line with business growth.

For business intelligence, all of the previously separate tools are pulled together into a single front-end management interface with two components – Management Studio for management and BI Studio for design – both of which are closely integrated with Visual Studio. SQL Server 2005 has more real-time analytical support built into the platform as well as a comprehensive extract, transform and load (ETL) capability for the entire enterprise, supporting heterogeneous databases.

In the past, DTS used the database engine to carry out the transform, whereas SSIS carries out the transform itself, with some standard transforms included but also expandable with custom transformations. With SQL Server 2005, the SQL Server reporting services reporting solution is supplemented with a report builder for user-generated reports and a multidimensional expressions (MDX) builder for developers, providing an interactive enterprise reporting environment, integrated with Office System applications. Analysis services allows the integration of relational databases and OLAP cubes to cache the OLAP data in synchronisation with relational updates and to perform analysis on real-time data. Finally, data mining capabilities (previously the domain of expensive high-end technologies) become available for general-purpose use in SQL Server 2005, offering exploration, pattern discovery and pattern description.

[Figure: the SQL Server 2005 platform]

In line with the Microsoft policy of using its new products internally (a process which they refer to, somewhat delightfully, as “dogfooding”), Microsoft is already using SQL Server 2005 heavily for its enterprise applications (e.g. their 1.7TB SAP R/3 database, the staging data warehouse for all Microsoft data and the Microsoft sales revenue and reporting system – basically all of Microsoft’s business-critical applications!). Looking at the metrics for just one of those applications – the SAP R/3 system in Redmond which handles Microsoft’s global financial, human resources, sales and distribution resources – this 1.7TB database across 25 production servers services 2,500 named users (over 57,000 users in total, with between 200 and 600 using the system concurrently), handling 300,000 SAP transactions a day and 100,000 batch jobs a month, whilst maintaining greater than 99.9% SAP availability and less than 0.5 second response time – on beta code!

In my opinion, SQL Server 2005 will be Microsoft’s most significant product release since Windows 2000 and is central to the Microsoft platform. The constant battle between Microsoft and Oracle will continue for some time to come, but this new version of SQL Server (which has been a long time coming) might finally help to persuade IT Directors that Microsoft is a serious contender in the enterprise data management space.

For more information about SQL Server 2005, I recommend the Microsoft TechNet SQL Server Tech Center, Mat Stephen’s blog, and Jamie Thomson’s blog (particularly for SSIS). SQL Server 2005 is due for release in November 2005, with four product editions – Express, Standard, Workgroup and Enterprise.