One of the reasons why OpenXML document formats are so useful

I’ve written before about the frustrations of working with OpenXML document formats on a Mac but this evening I found that a lack of native support for these files can actually be useful. I downloaded a .docx file that I wanted to lift some graphics from – of course, Mac OS X and Office 2004 for Mac didn’t recognise the file, but a .docx is really just a .zip archive and, after letting StuffIt Expander work on the document, I was soon able to locate and extract the images that I wanted. Very efficient.

Of course, this was on a Mac, but the same principle applies for a Windows or Linux (or even MS-DOS) PC. If you can find a utility that can read .zip files, it should have no problem extracting the constituent parts of an OpenXML document.
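
If you’d rather script the trick than rely on a ZIP utility, a few lines of Python will do the same job. This is just a minimal sketch (the file and folder names are made up for the example); it relies on the fact that a .docx package normally stores its embedded images under word/media/:

import os
import sys
import zipfile

# Usage: python extract_docx_images.py document.docx output_folder
docx_path, output_folder = sys.argv[1], sys.argv[2]
os.makedirs(output_folder, exist_ok=True)

# A .docx file is just a ZIP package, so the standard zipfile module can open it
with zipfile.ZipFile(docx_path) as package:
    for name in package.namelist():
        # Embedded pictures normally live under word/media/ inside the package
        if name.startswith("word/media/"):
            target = os.path.join(output_folder, os.path.basename(name))
            with open(target, "wb") as image_file:
                image_file.write(package.read(name))
            print("Extracted", target)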

Where does SharePoint store its data?

One of the things that’s always confused me about SharePoint is exactly where the data is held and I asked the question on my MOSS 2007 enterprise search course this week – this is what I found.

For each shared service provider (SSP), SharePoint has three main locations for data storage:

  • The full text index catalogue is a flat file. Created by the index server(s), it consists of a list of keywords and document identifiers, along with mappings showing which keywords exist in which documents. This level of abstraction between keywords and links means that the full text index catalogue is typically only around 5-12% of the size of the data being indexed.
  • The document identifiers in the full text index catalogue point to the document URLs, stored in the search database, a SQL Server database that is usually named sharedserviceprovidername_Search_DB.MDF. As its name suggests, this literally stores information relating to searchable content on a per-SSP basis, including URLs, access control lists (ACLs) and managed property information. On a WSS system this is named WSS_Search_computername.MDF.
  • The third location is the search configuration (or content) database, which contains configuration information relating to items such as crawl rules, content sources and the definition of managed properties. On a WSS system this is named WSS_Content.MDF.

In addition to the above, there are a number of other elements to the SharePoint solution:

  • The SharePoint Central Administration content database (itself implemented as a WSS web application), named SharePoint_AdminContent_guid.MDF.
  • SharePoint_Config_guid.MDF, a database containing SharePoint configuration information.
  • The individual web (.aspx) pages and configuration files, along with XSLT transformations and other supporting files (generally XML-based).

Finally, the actual content that is indexed by SharePoint remains where it always was – i.e. in the file shares, document libraries, web sites, business systems, etc. that SharePoint is being used to search across.

(This information relates to WSS v3 and MOSS 2007 – other SharePoint versions may differ.)

Designing a SharePoint server infrastructure

In yesterday’s post about warming up SharePoint, I hinted that there may be a few more SharePoint posts to follow, so here’s the first one – examining the various infrastructure considerations when designing a SharePoint server infrastructure.

These notes build on the post I wrote a few months ago about planning and deploying MOSS 2007. Whilst there is not yet any prescriptive guidance from Microsoft, many of the SharePoint Portal Server 2003 topology restrictions around small, medium and large server farms have been removed in MOSS, allowing the infrastructure to be customised to suit organisational requirements.

Looking at WSS first, the major limitations are: that it only supports indexing of a single site collection (although multiple content databases can be crawled – up to 100 per search server); that there is no ability to separate query and index server roles (indexing is automatic and search servers act as both query and index servers); and that a single web application can only access one search server. Consequently the options for scaling out are:

  1. Move content database(s) to a dedicated server.
  2. Provide a dedicated web front end server.
  3. Provide additional front end web servers, load balanced.
  4. Provide additional search servers, noting the 1:1 relationship between web applications and search servers.

Moving on to MOSS, there is more flexibility; however there is also more complexity due to the possibility of having to provide dedicated servers for particular application roles:

  • Whilst MOSS can run on a single server, this is not a recommended configuration and will probably only scale to a few tens (or maybe a hundred) users.
  • By separating out the database, there will be less memory and disk contention and the solution should scale into the hundreds of users. For high availability, the database may be clustered, but this will not really aid performance as the cluster model is two-node active-passive. Other high availability options include log shipping.
  • The next step is to separate some of the MOSS roles. Separating the web front end and query roles from the index server (there can only be one index server per shared service provider – generally one per farm, although there may be multiple SSPs for reasons such as security partitioning) should allow the solution to scale into the thousands of users, and adding further web/query servers may take this to around 10,000 collaboration and search users. There is some debate as to whether it’s better to provide a combined web front end and query server with a separate index server, or a dedicated web front end server with a combined query and index server (as per WSS), but the former is generally recommended for a number of reasons:
    1. It aids manageability, allowing for the addition of further query servers at will (if the query and index server roles are combined, additional query servers cannot be provided).
    2. It reduces network traffic – if a search produces a large result set on a separate query server, the whole set has to be transferred to the web front end server to be security-trimmed, which is needless data transfer; if the web and query roles are combined, that transfer never happens.
    3. RAM contention between the web and query roles can be alleviated through the provision of additional memory.
  • Finally, separating the web front end and query roles onto their own servers (up to 32 of each per farm) should allow tens of thousands of users (details of Microsoft’s own implementation are publicly available as an example of a large enterprise search deployment using SharePoint).

These are not the only possible configurations – the MOSS architecture is pretty flexible. It’s also worth noting that there is no need to load-balance query servers (the web servers will locate the appropriate query server) and there is no affinity between web and query servers (so if a query server goes offline then this should not cause a problem as long as other query servers exist within the infrastructure). The restriction on the number of index servers per SSP is a potential single point of failure; however there are ways of getting around this. One method is to configure multiple SSPs operating on the same data sources (one active and one additional), then to re-assign web applications in the event of the index server going offline; however this is less than ideal and it’s probably better to have a good disaster recovery strategy and to live with stale index data served from query servers whilst the index server is offline.

Other considerations include the provision of separate application servers for the Business Data Catalog (BDC) and Excel calculation services, so as to minimise the impact on returning search results to end users. Another method of reducing the load on the primary (load balanced) web front end servers is to provide a dedicated web front end server for use by the index server – although this creates a potential indexing bottleneck, it ensures that indexing does not negatively impact performance for end users. It may also be worth considering the provision of a dedicated administration server for central administration (which will aid security and performance by not placing this role on an end-user facing web server).

Moving on to the physical configuration of each server:

  • Web front end servers: Serving web pages is memory intensive as page output is cached until it is invalidated. Whilst there is some IO activity, this is rarely a bottleneck and if web front end servers are load-balanced then there is probably no need for a redundant disk configuration (at least not for performance reasons – it may still be desirable from an operational standpoint).
  • Application servers:
    • Query servers are memory bound and although an ideal configuration would load the entire index into RAM, a 1TB data source could be expected to result in a 120GB index file and that is a lot of memory to install in a server! (4GB is considered a practical minimum for a MOSS query server.) MOSS does employ an aggressive caching regime; however as the cache is invalidated by new data propagated from the index server or otherwise released to free up memory, there is a requirement for fast disk access (both read and write), so RAID 10 (also known as RAID 1+0 – mirrored and striped) is recommended.
    • Indexing is the most CPU-intensive task for a SharePoint server, so at index time the index servers will be CPU bound (some application server roles, such as the BDC and Excel calculation services, may also be CPU-intensive, but indexing more so). Memory access can be controlled by configuring the number of parallel reads, but the disk subsystem is also an important consideration and Microsoft recommends RAID 10 to provide the optimum disk performance. 4GB is considered the minimum amount of RAM that should be provided.
  • Database servers should follow the normal advice for SQL Server configuration.

For all servers, 64-bit processors are recommended; however there are some limitations around the availability of 64-bit IFilters (and protocol handlers) so it may be necessary to run index servers using 32-bit software. Regardless of this, it is not recommended to mix 32- and 64-bit implementations within a single tier of the architecture.

In all scenarios, the main bottleneck will be the network – low latency links are required between the servers in the farm. Even though it may seem to make sense to place a web front end server close to the user population, this is not really desirable: HTTP is designed to work over WAN links, but the web front end servers need to communicate with the query servers using a combination of HTTP and RPC calls, and with the database server(s) using TCP sockets. As well as interrogating query servers for search results, web front end servers communicate directly with the database server (i.e. not via a query server) for access control list information (used for security trimming) and access to the property store.

One significant improvement in MOSS (relative to SharePoint Portal Server 2003) is the storage requirement. SPS 2003 required disk space equivalent to 50% of the corpus (the sum of all data being indexed) for indexing and 100% of the corpus for query – data volumes that are just not practical. MOSS 2007 reduces this to a full text index catalogue that is typically between 5 and 12% of the corpus size, plus a search database of around 2KB per indexed document. Due to continuous index propagation (a new feature with MOSS that means the average time between indexing and search availability is between 3 and 27 seconds) and the merging of shadow index files, it is necessary to allow additional disk space for the index – around 2.5 times the maximum anticipated size of the index, so roughly 30% of the corpus volume, but still a major improvement on SPS 2003. It’s also necessary to balance the freshness of the index against the time that it takes to crawl the corpus, so typically multiple crawl rules will be defined for different content sources. Estimates for corpus size should be based on the following factors (a rough sizing sketch follows the list):

  • Number of items
  • Storage used
  • Types of items
  • Security
  • Latency requirements
  • Connectivity
  • Estimated indexing window
  • Expected yearly growth
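
To put the storage figures above into perspective, here’s a rough sizing sketch in Python. The corpus size and document count are made-up example numbers; the 5-12% index ratio, the ~2KB-per-document search database estimate and the 2.5x propagation allowance are the figures quoted above:

# Example inputs - substitute your own estimates
corpus_gb = 1024        # 1TB of content to be indexed
documents = 2_000_000   # number of items in the corpus

# Full text index catalogue: typically 5-12% of the corpus
index_low_gb = corpus_gb * 0.05
index_high_gb = corpus_gb * 0.12

# Allow 2.5x the anticipated index size for continuous propagation
# and the merging of shadow index files
index_disk_gb = index_high_gb * 2.5

# Search (property) database: roughly 2KB per indexed document
search_db_gb = documents * 2 / (1024 * 1024)

print(f"Index catalogue: {index_low_gb:.0f}-{index_high_gb:.0f} GB")
print(f"Disk to allow for the index: {index_disk_gb:.0f} GB (~{index_disk_gb / corpus_gb:.0%} of the corpus)")
print(f"Search database: ~{search_db_gb:.1f} GB")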

Regardless of the number of users, there is a practical limit of 50 million documents to be indexed per server farm (except MOSS for Search standard edition, which has a hard-coded limit of 500,000 documents). The workaround for this is to provide multiple server farms and to federate; however there are some limitations to this approach if search results are ordered on relevance, as the two sets of results cannot be reliably merged. The practical approach is to ensure that farms relate to logical business divisions (e.g. Americas, EMEA and Asia-Pacific) and to provide multiple columns (or multiple tabs) of search results, one for each farm. Note that SharePoint does allow SSPs to be shared between farms, so in this configuration the three separate farms could be used for indexing each regional corpus, with a common SSP for global information such as user profiles. Each full MOSS farm can support up to 20 SSPs, although MOSS for Search is limited to a single SSP per farm.

Hopefully the information here is useful for anyone looking at implementing an enterprise search solution based on MOSS. The Microsoft SharePoint products and technologies team blog provides more information, as do various SharePoint experts around the ‘net. Information about MOSS 2007 is also available in the Office Server System technical library. The key messages are that:

  • Planning is important and a test/proof of concept environment can help with establishing a starting point topology, monitoring actual performance and capacity data to identify resource bottlenecks and then scaling up the available resources and scaling out server roles.
  • Post-deployment, search queries and results reporting can (and should) be used to identify areas that can benefit from further optimisation – going beyond the scope of this post (the infrastructure) and gaining an understanding of end-users’ search usage patterns (what were they searching for and was the search successful) with the aim of improving the overall search experience and improving productivity.

Credits

This post was based on information presented by Martin Harwar, a SharePoint expert at English Tiger (and formerly head of practice at CM group) who works closely with the product group in Redmond and recently published an article entitled “Find, don’t Search!” discussing how MOSS can be used to alleviate some of the issues associated with enterprise search.

Warming up SharePoint

I’m spending a few days this week learning about how to implement an enterprise search solution using Microsoft Office SharePoint Server (MOSS) 2007 (so expect a few SharePoint-related posts to follow over the next few days). I was intrigued to note that the Virtual PC image used for the training course included a number of “warm up” scripts for SharePoint. Not having come across these before, I asked my instructor (Martin Harwar from English Tiger) about their purpose and he explained that the principle is equally applicable to all ASP.NET applications and is about improving the end user experience.

Normally, ASP.NET applications are compiled on a just-in-time (JIT) basis, which means that the first page load after changes such as resetting IIS, recycling application pools or editing web.config can be very slow. By “warming up” the application (deliberately accessing key pages), JIT compilation is triggered, so that when an end user accesses a page it is already compiled (and hence fast).
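
The warm-up scripts on the course image were ready-made, but the idea is simple enough to sketch: request each key URL once so that the pages are compiled before real users arrive. Here’s an illustrative Python version (the URLs are placeholders and a real SharePoint warm-up would also need to authenticate, which this sketch ignores):

import urllib.request

# Placeholder URLs - substitute the site collections and pages that matter to you
warm_up_urls = [
    "http://intranet.example.com/Pages/Default.aspx",
    "http://intranet.example.com/SearchCenter/Pages/default.aspx",
]

for url in warm_up_urls:
    try:
        # The response body is thrown away; the point is simply to trigger
        # JIT compilation (and populate caches) before end users hit the page
        with urllib.request.urlopen(url, timeout=60) as response:
            print(url, response.status)
    except Exception as error:
        print(url, "failed:", error)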

Joel Oleson’s SharePoint Land has a much better explanation, as well as the actual MOSS warm-up scripts for download. There are also some more comments on warming up SharePoint at Andrew Connell’s SharePoint developer tips and tricks site.

Incidentally, the scripts on Joel’s site are intended for MOSS but can be modified for WSS (and I understand that the principle may be equally applicable to other ASP.NET web applications).

Outsourcing syndicated content from WordPress to Feedburner without losing readers

Earlier, I wrote about some of the measures I’ve taken to reduce the bandwidth usage of this site, one of which is outsourcing the RSS feeds to FeedBurner (i.e. put them on Google’s bandwidth bill!).

The new feed location for syndicated content using either RSS or Atom is http://feeds.markwilson.co.uk/marksweblog/.

Hopefully, I’ve done everything that I need to do to make sure that no-one has to make any changes in their feedreader – yesterday’s 176% growth in subscribers (according to the FeedBurner stats, which are now picking up the traffic that was previously split across multiple feeds) certainly suggests that it’s all working!

[Image: FeedBurner FeedStats showing a significant increase in subscribers after the consolidation of feeds]

If all you want is the new address then there’s no need to read on; however as this is a technical blog, I thought that some people might be interested in how this all works.

Firstly, the feeds from the old Blogger version of this site (http://www.markwilson.co.uk/blog/atom.xml and http://www.markwilson.co.uk/blog/rss.xml) have permanent redirects (HTTP 301) in my .htaccess file, sending clients to the equivalent WordPress locations. This has been working since the migration to WordPress back in March.

I’ve had a FeedBurner feed at http://www.feedburner.com/marksweblog/ for a few years now and this remains in place. It uses FeedBurner’s SmartFeed technology to translate the feed on-the-fly into a format (RSS or Atom) compatible with the visiting client. Since FeedBurner made their MyBrand service free, I’ve set up feeds.markwilson.co.uk as a DNS CNAME record pointing to feeds.feedburner.com, so http://www.feedburner.com/marksweblog/ and http://feeds.markwilson.co.uk/marksweblog/ are basically interchangeable (although there is no guarantee that I will always use FeedBurner, so the http://feeds.markwilson.co.uk/marksweblog/ address is preferable).

Because I needed to make sure that anyone using the standard WordPress feed locations listed below would be redirected to the new feed, I used the FeedBurner FeedSmith WordPress plugin to redirect readers to http://feeds.markwilson.co.uk/marksweblog/ from any of the following:

http://www.markwilson.co.uk/blog/feed/
http://www.markwilson.co.uk/blog/feed/atom/
http://www.markwilson.co.uk/blog/feed/rdf/
http://www.markwilson.co.uk/blog/feed/rss/
http://www.markwilson.co.uk/blog/feed/rss2/

For the time being, the per-post comment feeds are unchanged (very few people use them anyway).

The really smart thing that FeedSmith does is to redirect most clients to FeedBurner, unless the user agent indicates that the request is from FeedBurner itself, in which case the syndicated content is served directly from WordPress. This is shown in the extracts below from the logs offered by my hosting provider:

HTTP 307 (temporary redirect)

This request (from an Internet Explorer 7 client) receives a temporary redirect (HTTP 307) as can be seen in the results from the SEO Consultants check server headers tool:

SEO Consultants Directory Check Server Headers – Single URI Results
Current Date and Time: 2007-09-13T15:22:18-0700
User IP Address:
ipaddress

#1 Server Response: http://www.markwilson.co.uk/blog/feed/
HTTP Status Code: HTTP/1.1 307 Temporary Redirect

Date: Thu, 13 Sep 2007 22:22:03 GMT
Server: Apache/1.3.37 (Unix) mod_fastcgi/2.4.2 mod_auth_passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 PHP/4.4.7 FrontPage/5.0.2.2635.SR1.2 mod_ssl/2.8.28 OpenSSL/0.9.7e
X-Powered-By: PHP/4.4.7
Set-Cookie: bb2_screener_=1189722124+216.154.235.143; path=/blog/
X-Pingback: http://www.markwilson.co.uk/blog/xmlrpc.php
Last-Modified: Thu, 13 Sep 2007 21:40:23 GMT
ETag: "d7e58019e9dbb9623c54b0721b0e1f3c"
Location: http://feeds.markwilson.co.uk/marksweblog
Connection: close
Content-Type: text/html

HTTP 200 (OK)

Meanwhile FeedBurner receives an OK (HTTP 200) response and is served the full feed. The advantage to me is that each visitor who receives a redirect is served just 38 bytes from this website whereas the full feed (which varies in length according to the blog content) is considerably heavier (over 17KB based on the example above).
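
The logic FeedSmith applies boils down to a user agent check. This is a simplified Python sketch of the behaviour described above, not the plugin’s actual PHP code (the string match is an approximation):

def handle_feed_request(user_agent):
    """Decide how to respond to a request for the local WordPress feed."""
    # FeedBurner's fetcher identifies itself in the User-Agent header;
    # it is served the real feed so that it can republish it to everyone else
    if user_agent and "feedburner" in user_agent.lower():
        return 200, "full WordPress feed"
    # Everyone else gets a lightweight temporary redirect to the
    # FeedBurner-hosted copy of the feed
    return 307, "Location: http://feeds.markwilson.co.uk/marksweblog"

print(handle_feed_request("FeedBurner/1.0 (http://www.FeedBurner.com)"))
print(handle_feed_request("Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"))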

So far the most visible advantage to me is that I’ve consolidated all syndication into a single feed, upon which I have a variety of services running (or available). The as yet unseen advantage is the consequential reduction in the bandwidth taken up with syndicated content – with some feedreaders polling the feed several times a day, that should be a considerable saving.

Attempting to reduce my website’s bandwidth usage

This website is in a spot of trouble. Over the last few months, I’ve seen the bandwidth usage grow dramatically although it seems to have grown faster than the number of subscribers/readers. We’re not talking vast volumes here and my hosting provider has been very understanding, but even so it’s time to do something about it.

So I had a think, and came up with a few options:

  1. Don’t write anything. Tried that for half of June (when I was on holiday). No noticeable change in the webstats!
  2. Write rubbish. It’s debatable as to whether that’s a continuation of the status quo.
  3. Drp ll th vwls s f wrtng txt mssgs.
  4. Optimise the site to reduce bandwidth usage, without a major rewrite. Yeah! That sounds like a challenge.

So, option four sounded like the best course of action. There are two main elements to consider in this:

  1. Site performance (i.e. how fast pages load).
  2. Bandwidth usage.

As far as I can tell, my site performance is not blindingly fast but it’s OK. I could use something like the WP-Cache plugin to cache content but, although that should reduce the load on the server, it won’t actually decrease my bandwidth usage. In fact it might increase it as I’d need to turn off HTTP compression.

That led me to concentrate on the bandwidth issues. This is what I tried (based mostly on Jeff Atwood’s experience of reducing his site’s bandwidth usage):

  • Shut out the spammers. Akismet had blocked over 7000 spam messages in 15 days and each of these would have loaded pages and leeched some bandwidth in the process. Using the Bad Behaviour plugin started to reduce that, blocking known spammers based on their IP address. Hopefully it hasn’t blocked legitimate users too – please let me know if it has blocked you (assuming you can read this!).
  • Compress the content. Check that HTTP compression is enabled for the site (it was). According to Port 80 Software’s real-time compression check, this reduces my page size by about 77% and, so the tool claims, improves download times by 410%. It’s also possible to minify CSS and JavaScript (i.e. remove whitespace and comments), and there are tools for HTML compression too, but in my opinion the benefits are slim (these files are already shrunk by HTTP compression) and code readability is more important to me (although at 12.7KB, my main stylesheet is a little on the bloated side of things – and it is one file that gets loaded frequently by clients). There’s a quick sketch for checking HTTP compression after this list.
  • Optimise the graphics. I already use Adobe Photoshop/ImageReady to save web optimised graphics but I used a Macintosh utility that Alex pointed me to called Ping to optimise the .PNG files that make up about half the graphics on this site (I still need to do something with the .JPGs and .GIFs) and that shaved just over 10% off their file size – not a huge reduction but it should help.
  • Outsource. Switching the main RSS feed to FeedBurner made me nervous – I’d rather have all my readers come to my domain than to one over which I have no control, but then again FeedBurner gives me some great analysis tools. Then I found out about FeedBurner’s MyBrand feature (previously a chargeable option but free since FeedBurner was acquired by Google), which lets me use feeds.markwilson.co.uk (i.e. a domain under my control) instead of feeds.feedburner.com. Combined with the FeedSmith plugin, this has let me keep control over all of my feeds. One more option is to use an external image provider (Jeff Atwood recommends Amazon S3 but I haven’t tried that yet).
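
As promised above, here’s a minimal Python sketch for checking that HTTP compression is actually being served and how much it saves (the URL is a placeholder; a tool like Port 80 Software’s checker does much the same thing):

import gzip
import urllib.request

# Placeholder URL - substitute any page on the site being checked
url = "http://www.markwilson.co.uk/blog/"

request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(request, timeout=30) as response:
    body = response.read()
    if response.headers.get("Content-Encoding") == "gzip":
        original = gzip.decompress(body)
        print(f"Compressed: {len(body)} bytes, uncompressed: {len(original)} bytes")
        print(f"Bandwidth saving: {1 - len(body) / len(original):.0%}")
    else:
        print(f"No gzip compression - the server returned {len(body)} bytes uncompressed")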

At the moment it’s still early days but I do have a feeling that I’m not eating up my bandwidth quite as quickly as I was. I’ll watch the webstats over the coming days and weeks and hope to see a downward trend.

Working around AOL’s short-sighted antispam measures

For the last year or so, I’ve been running my own mail server. This provides me with a number of advantages:

  • I have no storage limits (other than the physical limits of my hardware).
  • I have complete control over the mail server configuration.
  • I’m not at the mercy of my ISP’s e-mail server issues and delays if the mail queues get full.
  • I can use my home system to try out new features and functionality.

It’s been working well. A few newsletters stopped arriving after I installed the Exchange Server intelligent message filter (IMF) – as far as I’m aware there is no way to whitelist addresses or subjects with the IMF – but, together with the realtime block lists that I use, it generally does a good job of trapping spam with only a few false positives.

I’ve been wondering for a while why there always seemed to be a problem with e-mailing AOL users but it’s such a rare occurrence that I didn’t get too hung up on it. Then I needed to send something over to someone and it bounced back (actually, AOL just refused to accept a connection from my mail server), so I started to look into the problem more closely.

I checked out AOL’s postmaster best practice guidelines and there didn’t seem to be a problem at first. I use a combination of Microsoft Exchange Server 2003 and Entourage 2004 so the mail should be RFC-compliant; I verified the connecting IP address (using AOL’s own tools); I checked that I have a valid reverse DNS entry (again, using AOL’s own tools); my mail server is not operating as an open relay; there were no links (let alone invalid ones) in the bounced messages and I have a static IP address on my business ADSL connection.
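
Some of those checks are easy to repeat for yourself. For example, here’s a quick Python sketch that confirms a mail server’s reverse DNS (PTR) entry resolves back to the same IP address; the address shown is just an example from a documentation range, not my real one:

import socket

# Example value only - substitute your mail server's public IP address
mail_server_ip = "203.0.113.25"

try:
    # Reverse (PTR) lookup for the IP address...
    hostname, _, _ = socket.gethostbyaddr(mail_server_ip)
    # ...then forward-resolve that name to confirm it points back to the same IP
    forward_ips = socket.gethostbyname_ex(hostname)[2]
    if mail_server_ip in forward_ips:
        print(f"OK: {mail_server_ip} -> {hostname} -> {forward_ips}")
    else:
        print(f"Warning: {hostname} does not resolve back to {mail_server_ip}")
except (socket.herror, socket.gaierror) as error:
    print("Reverse DNS check failed:", error)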

Having checked all of AOL’s technical guidelines for whitelisting I applied to join the AOL whitelist using AOL’s spam feedback form. AOL’s postmaster replied with the following:

AOL does not accept email sent directly from dynamic IPs. If you have a static IP, please contact your ISP and have them send us an updated list of dynamic and static IP ranges. If you are on a dynamic IP, please send through your ISP’s mail server.

It seems that the problem is my IP address. AOL’s senders’ FAQ states that:

Customers with residential IP addresses should use the provider’s SMTP servers and should not be sending email directly to another ISP’s SMTP servers.

I do have a static IP address and a business account (i.e. it is non-residential) but that’s not the point – AOL has the audacity to prevent anyone mailing them from an IP address that they have recorded on their database as residential! Admittedly my configuration is not normal but why should AOL dictate how I provide my e-mail service? I do understand that much of the world’s spam will originate from zombie-infected PCs on people’s home networks (i.e. residential) but I have an SPF record implemented in order to verify that e-mail purporting to originate from my mail server really is from my server. In any case, AOL’s reason for blocking me was nothing to do with authenticating my e-mail server but was purely based on their assertion that I have a residential IP address – something that doesn’t seem to bother any other mail hosting provider.

Not really wanting to get into the situation where my ISP says it’s AOL’s problem and AOL says it’s up to my ISP, I decided to work around the problem through reconfiguring Exchange Server:

  1. Firstly, I changed the cost on my existing SMTP connector (set to use DNS to route e-mail and using an address space of *) from 1 to 2.
  2. Next, I created a new SMTP connector for mail to be forwarded to my ISP’s relay, gave this a lower cost (1) and added aol.com to the address space.

Now e-mail for anyone@aol.com will be sent using my ISP’s servers and all other external e-mail will go directly, based on the MX records specified for the recipient’s domain. Amset IT solutions have a page on their website which explains the configuration in detail.
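
Exchange chooses between connectors based on address space and cost – broadly, the most specific matching address space wins and cost breaks ties. The Python sketch below is purely illustrative (it is not how Exchange itself is implemented), but it shows why aol.com mail now takes the ISP relay while everything else goes direct:

# (address space, cost, route) - mirroring the two SMTP connectors described above
connectors = [
    ("*", 2, "direct, using DNS/MX lookup"),
    ("aol.com", 1, "relayed via the ISP's smart host"),
]

def choose_route(recipient_domain):
    # Keep the connectors whose address space matches the recipient's domain
    matches = [c for c in connectors
               if c[0] == "*" or recipient_domain.lower().endswith(c[0])]
    # Prefer a specific address space over the catch-all, then the lowest cost
    best = min(matches, key=lambda c: (c[0] == "*", c[1]))
    return best[2]

print(choose_route("aol.com"))      # relayed via the ISP's smart host
print(choose_route("example.com"))  # direct, using DNS/MX lookup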

I needed a quick way of testing the message flow, so I sent a test message via my mail server to my e-mail address at work. Checking the headers on receipt showed that it had gone straight from my server to my employer’s e-mail gateway. Next, I added the domain name for my work e-mail address to the address spaces on the connector for e-mail to be routed via my ISP and repeated the test. Again, checking the headers verified that the message had indeed passed through my ISP’s relay. Finally, I took my employer’s domain name out of the address space on the new connector and verified that e-mail was directly routed once more.

It’s not difficult but it is a further complication in my mail server configuration, just to satisfy the requirements of one (admittedly large) ISP …and just one more reason for me to cringe when I hear that someone is an AOL subscriber.

Mounting virtual hard disks in Windows Vista

Microsoft’s Virtual PC Guy (Ben Armstrong) wrote a blog post last year about using the VHDMount utility from Virtual Server 2005 R2 SP1 with a few registry edits to enable right-click mounting/dismounting of virtual hard disk (.VHD) files.

As .VHD files become ever more prevalent, this is a really useful capability (for example, Windows Vista’s Complete PC Backup functionality writes to a .VHD file).

The trouble is that, as supplied, Ben’s script does not work on Windows Vista as attempting to run vhdmount.exe will return:

Access Denied. Administrator permissions are needed to use the selected options. Use an elevated command prompt to complete these tasks.

An elevated command prompt is fine for entering commands directly (or by running a script) but what about Ben’s example of providing shell integration to mount .VHDs from Explorer? Thankfully, as Steve Sinchak noted in TweakVista, Michael Murgolo has written an article about elevating commands within scripts using a free PowerToy called elevate, which is available from the Microsoft website. After downloading and extracting the elevate PowerToy scripts, I was able to confirm that they would let me run vhdmount.exe using the command elevate vhdmount.exe.

Following that, I edited Ben Armstrong’s registry file to read:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Virtual.Machine.HD]

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Virtual.Machine.HD\shell]
@="Mount"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Virtual.Machine.HD\shell\Dismount]

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Virtual.Machine.HD\shell\Dismount\command]
@="\"C:\\Program Files\\Script Elevation PowerToys\\elevate\" \"C:\\Program Files\\Microsoft Virtual Server\\Vhdmount\\vhdmount.exe\" /u /d \"%1\""

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Virtual.Machine.HD\shell\Mount]

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Virtual.Machine.HD\shell\Mount\command]
@="\"C:\\Program Files\\Script Elevation PowerToys\\elevate\" \"C:\\Program Files\\Microsoft Virtual Server\\Vhdmount\\vhdmount.exe\" /p \"%1\""

[HKEY_CLASSES_ROOT\.vhd]
@="Virtual.Machine.HD"

Note the /d switch in the dismount command. I had to use this (or /c) to allow the disk to be unmounted and avoid the following message:

The specified Virtual Hard Disk (VHD) is plugged in using the default Undo Disk option. Use /c to commit or /d to discard the changes to the mounted disk.

I chose the discard option as most of my .VHD mounting is simply to extract files, but others may prefer to commit.

A few more points to note about VHDMount:

A few Live Meeting tips

I’ve just spent the last couple of hours listening/watching a Live Meeting webcast. In recent weeks I’ve found that I’m attending more and more of these as part of the various Microsoft beta and technology adoption programmes that I’m participating in and frequently I need to take notes using Microsoft Office OneNote 2007 on the same PC that I’m using to view the slides and listen to the audio. Today I decided to try and connect to the webcast simultaneously from my Mac (i.e. using a second computer to view the slides whilst I write notes on the first) and I’m pleased to say that it worked using Microsoft Office Live Meeting Web Access (unfortunately the full Live Meeting client is required for VOIP audio but all I needed to do in this case was view the slides).

Although Live Meeting supports a pretty wide selection of browser and Java VM combinations, Firefox 2.0 on Mac OS X is not a supported browser/platform combination – the workaround is to use Safari and Apple Java (at least v1.4.1).

Here’s some of the advice and guidance that I’ve accumulated as I’ve been working on local (via Microsoft Office Communications Server 2007) and hosted Live Meeting calls over recent weeks (this is just what I’ve found and is not a comprehensive list):

  • The Microsoft Office Live Meeting 2007 Client can be downloaded from Microsoft Office Online.
    If the link in the meeting invitation doesn’t work, try launching the client and entering the details manually.
  • If colleagues can’t hear you on the meeting, check that your microphone is unmuted (the default is muted), that (if you are using a webcam) the microphone is close enough to pick up your voice and don’t assume that your notebook computer has a built-in microphone (this one stumped me for a while until I plugged in a microphone and everything jumped into life)!
  • (Microsoft Connect users may find the Live Meeting audio issues FAQ useful.)
  • The Live Meeting support website features a knowledge base for troubleshooting issues with Live Meeting.

Installing Microsoft Dynamics CRM without domain administrator rights

I recently inherited the task of designing the infrastructure for a Microsoft Dynamics CRM 3.0 implementation. After being briefed by the consultancy partner that we are using for the application customisation and reading Microsoft’s implementation guide I was fairly comfortable with the basic principles but I was also alarmed that the product seems to require installation to be carried out using an account with Domain Admins permissions. There’s no way that I will be granted those rights on our corporate Active Directory (and nor should I be) – too many applications seem to require elevated permissions for service accounts and it makes life very difficult when trying to define a delegation policy for Active Directory administration.

Regardless of the assurances I was given that Domain Admins rights are only required to carry out the installation (and subsequent updates) and that the account can be relegated to a standard domain user afterwards, I felt that there must be a way around this – surely the groups that the CRM installation creates can be pre-staged somehow, or an organisational unit can be created with delegated rights to create and manage objects?

It seems the answer to my question is yes – I’ve now been pointed in the direction of Microsoft knowledge base article 908984 which describes how to install Microsoft Dynamics CRM 3.0 as a user who is not a domain administrator by using the minimum required permissions.