Installing a 32-bit OS on an EFI system using MDT 2012

A couple of months ago I blogged the method I am using to deploy VHD images side by side with a host OS via MDT 2012.  I have been fighting a battle with EFI-based systems, namely the HP 8200 Elite SFF, and thought I’d share my findings, if only for my own future reference.

The deployment of my 32-bit VHD had been running fine on every other machine we have.  When it came time to test it in the lab where it will mainly be used, I ran into some trouble.  After some digging around, I discovered that 32-bit operating systems cannot be installed on a GPT-partitioned disk.  I noticed from within the Disk Management console that every HP 8200 Elite built using MDT had a GPT partition style, despite the TS explicitly selecting MBR.  This is because MDT 2012 is ‘smart enough’ to detect EFI systems: it automatically ignores the settings in the Format & Partition step and partitions the disk as GPT.
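As an aside, you can confirm the partition style of a machine yourself with a couple of standard diskpart commands from an elevated prompt (WinPE or the full OS):

diskpart
list disk
exit

An asterisk in the Gpt column of the list disk output means the disk is GPT partitioned.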

This was very irritating.  I had a 64-bit host OS and a VHD that HAD to be 32-bit.  I could copy the VHD across without a problem.  I could even add it to the BCD store OK, but when I tried to boot from it, it would launch Windows RE.  This is down to the relocation of the boot configuration on EFI systems: the BCD store no longer lives in the 100MB System Reserved partition but on the EFI System Partition, with the firmware’s boot entries held in NVRAM on the motherboard.

To try and overcome this issue I took several approaches.  Firstly, I disabled the Format step in the TS and edited the Unattend.xml, instructing it to format and partition the disk at that stage instead.  That failed because, by the time the Unattend.xml is processed, the TS has already begun saving things to the local disk, so formatting the disk would break the deployment.

Next I wrote a diskpart script to format the disk and create the System Reserved and Windows partitions, but that only led to more errors.
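For the curious, the script was along these lines (a rough sketch from memory; partition sizes are illustrative):

rem wipe the disk and lay it out as MBR with a System Reserved and a Windows partition
select disk 0
clean
convert mbr
create partition primary size=100
format quick fs=ntfs label="System"
active
create partition primary
format quick fs=ntfs label="Windows"
exit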

Finally, I decided to have one last search through the BIOS for some hint of EFI that I could disable.  The only place EFI is mentioned is within the Boot Order selection sub-menu, so from there I disabled EFI devices as boot options.  Back in the TS, I re-enabled the Format and Partition step and ran the deployment again, this time with more success.

In the end, a fairly simple fix.  It’s funny how sometimes you can get carried away with elaborate patches and scripts upon scripts, when actually the fix is a simple one-step process.  My finishing thought: having trouble installing a 32-bit OS via MDT 2012 to an EFI system, such as the HP 8200 Elite?  Disable EFI devices from the boot menu!


Mac GUI Bootloader using rEFIt

As alluded to in a previous post, I used rEFIt to provide the GUI bootloader for our iMacs.  This simplifies the boot menu when dual booting an iMac.

Working with rEFIt

rEFIt provides a GUI bootloader for Macs and is very easy to work with.  Simply install it, navigate to the /efi/refit folder and open refit.conf in a text editor.  The .conf file is very well laid out and describes in sufficient detail which directives control which aspects of the GUI.

I created a .bmp logo, dumped it in the refit directory, referenced it in the .conf file and uncommented the relevant line.

For obvious safety reasons, I also uncommented the ‘disable all’ line in the .conf file so that the only boot option offered is the internal disk.  By default every boot method is displayed in the GUI, meaning users can boot from external USB, CD or the network.  If you’d rather not disable everything, you can instead pick which individual items to hide.

You can also select which partition or operating system takes priority at boot.  Like the other options, the commented lines within the .conf give detailed instructions on this, but basically W=Windows, L=Linux, and so on.
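Pulled together, the handful of lines I ended up touching in refit.conf looked roughly like this.  Treat it as a sketch: the logo filename is just an example, and directive names may vary slightly between rEFIt versions, so check the comments in your own copy of the file for the exact syntax.

# custom .bmp logo dropped into the /efi/refit directory
banner ourlogo.bmp
# hide every boot option except the internal disk (no USB, optical or netboot entries)
disable all
# make Windows the default selection (L would prefer Linux, and so on)
default_selection W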

Using Iceberg to redistribute Mac software

A continuation of the previous post on automating the Lion deployment process…

As with the last post, check the foot of the post for references.

Working with Iceberg Package Manager

Some software that we use can only be installed from the App Store, with no option to download a standalone installer first.  This makes it difficult to obtain an installer to roll out to client machines.  Luckily, with the help of Iceberg, this isn’t an issue.

Iceberg is able to lift most installed applications right out of the Applications directory and package them into a .pkg ready for redistribution.  Some packages aren’t as simple as that, and we will get to those later in this section.  For now let’s focus on the simple ones…

Download and install Iceberg, which you can grab from here:

http://www.macupdate.com/app/mac/14516/iceberg

Launch Iceberg and, from the menu bar along the top of the screen, select File > New Project.  For a basic package, like our Texteditor example, we can stick with the core template package.  Name the package appropriately and continue to the next screen.

Iceberg New Project Wizard

In the new window, expand the name of the package in the left-hand pane and select Files.  In the central pane, right-click Applications and select Add Files…

Add files to the Project

Navigate to the location of the installed application and double-click it.  Press OK to the popup about absolute paths, if applicable.  You should now see the application under the Applications header.

Adding files to the project

Next, click on the Settings tab in the left-hand pane.  Here you can amend the details of the package to reflect those of your own organisation.

Change the default name to something appropriate to you

From the menu bar along the top of the screen, select Build > Build.  Close the window when it completes.  By default Iceberg saves the built .pkg to the current user’s home folder, in a directory named after the package itself; the .pkg will be in the sub-directory named Build.

As I mentioned earlier, this method works for the majority of applications, but not all of them.  Some applications that I found NOT to work this way were Xcode, Office 2011 and McAfee 1.1.

If you skip down to the references at the bottom of this document, you will find a good example from another blogger detailing the process of repackaging Office 2011.  I found that this didn’t work very well with Xcode 4.3.2, so I had to go about that another way.  McAfee I couldn’t repackage at all using Iceberg, as the main .mpkg calls upon a further four .pkgs, each with its own pre- and postflight scripts.

Deploying McAfee

The McAfee .mpkg invokes four smaller .pkgs, making it quite difficult to automate.  The way I found to get past this was to copy the installer to the client machine using the file copy task in DS.  Then, to a script that runs at the end of the workflow, I added the following line:

sudo installer -pkg /users/administrator/desktop/mcafee.mpkg -target /

It’s not the cleanest method, but it works.  I then have a cleanup script that runs after this one which deletes the .mpkg.
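The cleanup script itself is nothing fancy; something along these lines (with the path assumed to match the file copy task above) does the job:

#!/bin/sh
# remove the McAfee installer that the DS file copy task dropped on the desktop
rm -rf /users/administrator/desktop/mcafee.mpkg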

Deploying Xcode 4.3.2

This was the most challenging package I faced when configuring the deployment.  Like McAfee, the .mpkg for Xcode invokes another .pkg, the Mobile Device Framework.  The only way I could get around this was to split the job into two packages: the first installs the Mobile Device Framework, and the second installs Xcode.

Before we can look at any of this, we will need the download.  To get this, you need to create a free developer account and log in to the following site:

https://developer.apple.com/downloads/index.action 

Once the installer has finished downloading, mount the .dmg.  Right-click on Xcode, select Show Package Contents, navigate to Xcode > Contents > Resources > Packages and copy MobileDevice.pkg to the desktop for easy access.

Launch Iceberg and, as before, create a new project.  This time we will be creating a Darwin package, so select the appropriate package type from the menu and name the project.  Before completing the package, I usually copy the MobileDevice.pkg into the newly created project folder.  This keeps everything together should I need it again at a later date.

From the project window, expand the package by clicking the down arrow on the item in the left-hand pane.  From the list, click on Scripts.  Simply drag and drop the MobileDevice.pkg into the bottom window and ensure the checkbox to the left of it is ticked.

Adding the Mobile Device Framework to the Project

The next thing we need to do is create a postflight script and link it to the Installation Scripts window as shown in the above screenshot.  The script takes the form of:

#!/bin/sh
# postflight: install the MobileDevice.pkg bundled in this package's resources
sudo installer -pkg "$1/Contents/Resources/MobileDevice.pkg" -target /

Once you have written the script, name it postflight and verify that it doesn’t have an extension of any kind, then make it executable by running the following command in the Terminal:

sudo chmod a+x /path/to/postflight

Copy the newly created script to the project directory, for the same reasons that we copied the MobileDevice.pkg there earlier.  Now drag and drop it into the postflight area of the Installation Scripts pane of the project window, ensuring that the checkbox to the left of postflight is ticked.

Once you have done this, build the project and leave it to one side.  Now we need to create the package for Xcode itself.  To do this we need to install Xcode on a test machine, being sure to install any extras like the iOS simulators from Xcode > Preferences > Downloads.

When you are happy with the Xcode installation, we need to run a small command from the terminal to ensure permissions are correctly set before we can capture it:

sudo chown -R root:wheel /Applications/Xcode.app

Launch Iceberg and create a new project, same as last time: a Darwin package.  Expand the package so that you can see the Files tab.  Under Applications, right-click and select Add Files.  Within the window that pops up, select Applications and choose Xcode.  That’s all there is to it!  Build the project and move both project directories to your deployment server’s Packages repository.

As mentioned earlier, I drew a lot of inspiration from Mr derflounder’s blog when trying to package Office 2011 and Xcode 4.3.2.  I highly recommend a thorough read of his posts and their comments before undertaking these deployments.  You can find links to his blog at the bottom of this post.

 

References

Download the installer for Xcode from the developers site here:

https://developer.apple.com/downloads/index.action

Iceberg User guide:

http://s.sudre.free.fr/Software/documentation/Iceberg/English.lproj/documentation/index.html

 

I had a lot of ideas and help from this insightful blog.  Here is a link to one of his posts, which helped me repackage Office 2011:

http://derflounder.wordpress.com/2011/06/24/re-packaging-metapackages-with-iceberg/

Another useful post from derflounder:

http://derflounder.wordpress.com/2012/02/17/building-a-grand-unified-xcode-4-3-installer/

 

 

 

Deploying Lion & Windows 7 to iMacs

Well, I’ve been pretty busy lately working with Macs, which is something I don’t do very often.  We have finally gotten round to automating the rollout of our Mac lab.  I used a combination of tools, as we dual booted the iMacs with Lion and Windows 7.  To deploy the Mac side of things, I used DeployStudio.  I used rEFIt to provide a GUI bootloader to switch between Lion and Win 7.  To roll out Win 7, I used an altered Task Sequence from MDT 2012.  Here is how I managed to get it all working…

Please see the foot of this post for references…

The theory

DeployStudio is a fairly straightforward package.  Firstly, you create your deployment server via the DS Assistant.  Then you create a netboot image; this is basically the Mac version of WinPE, though a lot larger in size at nearly 2.5 GB!  The netboot image is a stripped-down version of Mac OS and is what we boot into when running a workflow.  As well as giving us access to the features of DS, it also gives us the tools found in Utilities: disk and network utilities, Terminal, etc.

From the DS admin tool, we can amend our workflows and create new ones.  We can add packages for deployment and we can monitor the ongoing deployment tasks to clients.

When it comes to automating the installation of applications, we can use the Iceberg package manager on a target machine to capture the apps and then copy them to the deployment server.  Some packages require more work than others; hence our main deployment workflow has multiple package installation steps, as some software is happy to be installed from the netboot image while other software needs to be installed at the next boot.

As we are dual booting the iMacs, we will be using rEFIt to provide a GUI boot loader, giving the user a graphical interface from which to select whether the Mac boots into Lion or Windows 7.

Our master workflow will partition the drive in a way that allows us to install Windows 7 after the successful installation of Lion.

The Deployment process

Once we have the server set up, we can turn our attention to the DS Admin utility and begin the deployment process.  The deployment process itself couldn’t be simpler.  Much the same as MDT 2012, we boot to the netboot image and select the workflow we want to run.  Once we click the play button, the workflow does its thing and requires only as much user interaction as the workflow configuration defines.

Before delving into that, we’ll have a look at how to create a Deployment server.  The instructions for setting up and configuring the server have been influenced by the instructions within the DeployStudio manual, which is referenced at the end of this document.

Configuring the server pre DeployStudio

DeployStudio should be installed on a server platform; any form of Leopard or Lion will suffice.  We will be making some changes to the services running on the server.  To do so, open Server Admin.

DeployStudio (DS from here on) doesn’t work well across subnets, so you need to ensure that the server you are installing to is on the same subnet as the machines you will be deploying to.  The server needs a static IP and, if it is not running DNS itself, it should be pointed at the appropriate DNS server.

Only the AFP and NetBoot services need to be configured; Open Directory can also be configured but is not a requirement.

AFP, the Apple Filing Protocol, allows us to share images, scripts and files across the network.  All we need to do with this service is ensure that Guest access is left at its default and then start the service.

We cannot finish configuring the NetBoot service until we have created a netboot image, but we will discuss what has to be configured here.  With NetBoot selected in the left-hand pane, select Settings > General.  Check Ethernet, ensuring that you select the correct Ethernet port if you have multiple network ports on your server.

Netboot Settings from Server Admin

From the bottom of the main window, check off the image and data locations.  Ensure that you SAVE NOW, as you won’t be able to complete the following step if you haven’t done so.

From the tabs along the top of the main window, select Images.  Verify that the netboot image that you created using the DeployStudio Assistant is visible.  Select it, save and start the service.

From Workgroup Manager, create a deployment admin account.  This is the account that will be used to log in to the netboot image and run the workflows.  It is important to ensure the password is very secure, as with any admin account.

Back in Server Admin, we need to ensure that the admin account just created has the correct permissions on the Public directory, as this is where the DeployStudio repositories will be stored.  To do this, select the server in the left-hand pane and choose File Sharing from the top of the main window.  Select Share Points and then select Public.  Under the Permissions section of the window, use the + button to add the newly created deployment admin account to the permissions list.  Ensure that the admin has Read & Write permissions, save and exit.

Setting the permissions for the Deploy Admin account

With Public still selected, click Browse, then New Folder, and call it DeployStudio.

Installing and configuring DeployStudio

Download DeployStudio from here: http://www.deploystudio.com/Home.html.  Once you have it downloaded, go ahead and install it.

Once installed, open the DeployStudio Assistant, which can be found in /Applications/Utilities.  From the list of options, select ‘Setup a DeployStudio server’.

DeployStudio Assistant

In the next window enter the server address followed by the port number, which will look something like this:

https://ServerName.com:60443 (60443 is the default port number)

The username and password will be the account that we created in the last stage, the deployment admin user.

In the next window select whether this server will be a master or a replicator.  Select the appropriate option and move on to the next screen.

This window is for the repository settings.  Ensure that you select Network Sharepoint so that the repository is accessible throughout the network.  Continue to the next screen.

Create the path to the repository which, if you followed the instructions when configuring the file sharing settings for the server, will be under Public.  The syntax for this field will look similar to this:

afp://ServerName.com/Public

Enter the username and password for the deployment admin account and continue on to the next screen.

Unless you require email notification, you can skip past this screen.

Next is the network security window: ensure that com.deploystudio.server is selected in the drop-down menu.  Select the appropriate interface for your setup and leave the port number at the default of 60443.  Verify that the ‘Reject unknown computers’ box is UNCHECKED.  Continue to the next screen.

This is the user group window.  If you require more stringent security settings, you can restrict access to specific user groups.  These groups must be created in Workgroup Manager, although users can be added to them at any time.  Continue to the next screen.

This screen allows you to enable or disable multicast.  Even if your network is incapable of working with multicast, it is not advised to disable it at this point, as you will not be able to change it at a later date.  It’s best practice to leave this enabled.  Continue to update the server settings and hit OK when the process is complete.  The server has now been configured; next up is the creation of the netboot image.

Creating a netboot image

Launch the DeployStudio Assistant and select Create a DeployStudio Netboot set from the menu.  Read the service information window and continue to the next screen.

Netboot Creation Window: Name the image

Give the system a sensible name, UID, Language and time server.  Continue to the next screen.

This window allows you to specify a particular server to connect to as the priority, along with an alternative if you have both a master and a replicator DS server.  The syntax this takes is as follows:

https://ServerName.com:60443

The authentication screen allows you to enter a default username and password for logging into the netboot image.  Be warned that entering a default user and filling in the password field means that any time the netboot image is loaded, these fields will be pre-populated and the operator will be able to launch whichever workflows they wish.

To get around this, you can simply disable the NetBoot service from the Server Admin tool when it is not needed.  This is the advised approach regardless of whether you pre-populate the user and password fields.
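If you’d rather script this than open Server Admin every time, OS X Server’s serveradmin command-line tool can toggle the service.  A sketch, assuming the service is named netboot on your server version:

sudo serveradmin stop netboot
sudo serveradmin start netboot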

There is also an option here for a VNC username and password.  This is here so that admins can remotely connect to the netbooted client and view the status of the deployment.

Once you have made your choices here, move on to the next screen.

The options window allows you to select which extras you want to include in the netboot image.  Some package installers require the use of Python or Ruby, so it may be a good idea to include those in your image.  Make your choices and move to the final screen.

It’s important to leave the destination location as the default:

/Library/NetBoot/NetBootSP0 (that’s SP zero, not the letter O)

If the location doesn’t already exist it will be created.  Click continue and that’s the netboot image complete.

##Important:  Once the netboot image is ready you will need to start the NetBoot service from Server Admin.  This process was described in the previous section, Configuring the server pre DeployStudio.##

An overview of the DeployStudio Admin tool

The DS Admin tool allows us to create the workflows we will use to deploy Lion, packages, scripts, files and folders to our client machines.  Down the left-hand side of the utility we have various tabs.

Clicking on Activity will display currently running workflows, their progress and client name.

The Computers tab shows the computers known to DS; this list can be pre-populated by serial number or MAC address.  The advantage of this is that we can assign computer names, IP addresses and other settings to specific MAC addresses or serial numbers.

The Workflows tab will be the most frequented tab of the utility.  This is where we can create our own workflows, as depicted in the screenshot below:

An Overview of the DeployStudio Admin tool

The featured workflow in the screenshot is fairly complex: partitioning the disk, installing the OS, copying files, installing software, and finally running the software update utility when the system boots for the first time.  We will look more at building workflows in a later section.

The Masters tab lists the captured Lion images that are ready to be rolled out.

The Packages tab details all of the packages that have been uploaded to the DS repository.  It also lists any package sets that will install multiple packages in one shot.

The final tab, Scripts, displays all of the stored scripts in the DS repository.  It also allows you to create your own shell scripts for deployment.

Workflows

The master workflow that we will be using contains a partitioning step, which will ready the drive for the installation of Lion and Windows 7.  It is important that Windows 7 is installed last.

By default there are a couple of workflows already created.  Some of these we will use; some of them can be removed from the list.  The ‘Create a master from a volume’ workflow is used to capture a Lion image from the target machine, which will serve as the master base image installed on all the Macs.  It is ready to go right out of the box and requires no adjusting.

The recovery workflow is a custom one I made that reinstalls a working thin Lion image.  It is most useful when testing software installs and you need a fresh image without any software pre-installed.

To create a new workflow, have the workflows tab selected in the left pane and then click on the plus symbol in the middle pane at the bottom of the window.

To add new tasks to the workflow, click on the plus sign next to the text ‘drag tasks here’.  This action will open a third pane to the right of the central pane.  This pane shows all of the tasks that can be added to a workflow.  Some examples of these tasks would be install a package, restore an image, create an image, partition a disk and so on.

The documentation for DS is pretty extensive, so I won’t go into too much depth here; we’ll just look at a few of the main tasks that can be incorporated into a workflow.

Partition

As the title suggests, this task will partition the drive.  You have the choice of customizing the partitioning yourself, but the task comes with some default layouts.  For example, you can format the drive for a single Mac OS installation, set it up for a dual installation of Mac and Windows, or even a triple boot of Mac, Windows and Linux!

Adding the Partition task to a workflow

When choosing a dual or triple boot layout, you can easily adjust the size of each partition using the sliders.  Mousing over a partition will display a pencil icon in its upper right corner, which allows you to edit the title of said partition.

From the screenshot above you will note a couple of other things.  The first is that you have the option of choosing the target disk for this action to be performed on.  To fully automate the deployment process, it can be a good idea to change this from ‘user selected’ to first available disk.

The second item to take note of is the check box at the bottom of the window labeled ‘Automate’.  Checking this box will automate this step and require no interaction from the user.

Restore

This task restores an image to a partition.  As you’ll see below, there are many options to configure on this task.

Adding the Restore task to a workflow

From now on, the target volume can be set to ‘previous task target’.  We need to select the type of file system the image was created with, usually HFS, and then select the image to be used.  These images, or masters as DS refers to them, are the images captured using the default ‘Create a master from a volume’ workflow.

The rest of the options are pretty self-explanatory.  We can change the name of the volume if we want to, but seeing as we named it something sensible in the previous Partition task there is no need to rename.  The only option I changed here was to uncheck the recovery partition checkbox, as we are unable to work with multicasting.

Ensure that the automate checkbox is ticked and move on to the next task.

Hostname Form

This task is optional.  It allows you to name the machine.  If you have pre-populated the Computers tab within DS with either the serial numbers or MAC addresses of the Macs, then when this task is displayed the name given in the Computers tab will already appear as the default name.

Unfortunately you are unable to automate this task.

Package Install

As the name suggests, this task is for the installation of software packages.  You can either install individual packages or use a package set to install multiple items.

This task is fairly straightforward.  As with every other task, select the drive you wish to install the software to and then select the software item or the software set to be deployed.  As you can see from the screenshot, you have the usual check box for automate, and a second one for postponing the item/set until the first boot.  Selecting this check box will copy the installers into a temp directory on the local disk and invoke them upon the next reboot.  You also have a check box to ignore failures; this can be useful if an item installs but loses some functionality due to a lack of Internet connection or user interaction.

Adding a Package to the workflow

To create a package set, select the Packages item in the left-hand pane and click the plus sign at the bottom of the pane.

The packages tab from the DS Admin tool

You can now add packages to these sets by dragging items from the main packages tab and dropping them on the name of the set you wish to add them to.  We’ll look at how to add software to the DS server in the section titled Repositories.

Repositories

By default, the DS repositories are stored in /Shared Items/Public/DeployStudio/.

The repository directory

This is where we need to place items that will be used within the deployment.  There are a few folders to take note of: Packages, Files and Scripts.  These all link back to the DS Admin tool we just looked at.  To import software we need to have the .pkg/.mpkg files stored in the Packages folder.  Similarly, if we want to use the script or copy files tasks in our workflow, we need to have the files and scripts located in the appropriate folders.
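As an example, copying a freshly built Iceberg package into the repository from Terminal would look something like this (the package name is illustrative, and the paths assume the default locations described above):

sudo cp -R ~/Texteditor/Build/Texteditor.pkg "/Shared Items/Public/DeployStudio/Packages/"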

Installing Windows 7

We will be using MDT 2012 to install Windows 7.  I have created a specific Windows-to-Mac Task Sequence, which runs much the same as any other TS.  The TS runs a diskpart command that tells Setup to install to the third partition, as the Mac uses the first.  The only other change to note here is that the Boot Camp drivers could not be deployed conventionally as out-of-box drivers, but were instead deployed as an application.
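The diskpart step itself is tiny; a sketch of the sort of script it runs (the partition number assumes the dual-boot layout created by the DS partition task, with Windows in the third slot):

rem prepare the third partition for the Windows 7 installation
select disk 0
select partition 3
format quick fs=ntfs label="Windows"
exit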

Apart from these two modifications, the roll out is the same as any other.

References

DeployStudio has a very detailed guide that includes walkthroughs.  It can be found here:

http://web.me.com/driley/web/deploystudio_files/DeployStudio_Guide.pdf

Replacing the screen inverter in an HP Pavilion dv2500

This week I have been looking at an HP Pavilion dv2500 that had an intermittent backlight.  The backlight would flicker on if you opened and closed the screen, leading me to believe it was the LCD inverter cable that sits in the hinge of the screen.  After taking the laptop apart and replacing the cable, the problem persisted.  The only solution remaining, aside from replacing the screen, was to swap out the actual inverter board.  A fairly simple process:

1. Remove the screw covers on the laptop lid, and then remove the screws.

2. Using a credit card or something similar, snap open the lid.

3. Remove the lid, exposing the screen and the inverter, which sits between the hinges underneath the screen.

4. Unscrew the two screws to the left and right of the board, and unplug the LCD cable and power lead.

5. Replace the inverter board and reassemble the laptop.

 

Sorry for the lack of photographs in this post; I had to do this job in a rush.

Samsung NP R522 screen replacement guide

I recently received a Samsung R522 from a customer that had a pretty beaten-up screen.  Screen replacements are generally an easy fix, but I thought I’d post a quick run-through here for anyone in doubt.

Step 1: Remove screw covers

Using a flat-head screwdriver, gently remove the screw covers dotted around the periphery of the screen; there are six in total.

Screw Covers

Carefully remove screw covers

Step 2: Unscrew screen casing

Once you have removed the screw covers, you can unscrew the screws around the screen casing.  Remember to put them in a safe place like a glass or something, as they are tiny and very easy to lose!

Step 3:  Gently separate the casing from screen

Using a credit card or something similar (maybe something of less value in case you snap your card), very gently separate the case from the screen.  Do this by inserting the card into the gap between the front and back of the casing and easing it around the entire casing hood.

Separate the case from the screen

Once you have worked your way around the case, gently separate the plastic lid from the screen and the back of the casing.

Ease the plastic lid from the screen and back of casing

Step 4: Unscrew screen from brackets

The screen is held in place by two brackets, one on either side of the screen.  It is attached by three screws on each bracket.  Locate and unscrew them.

Unscrew screen from brackets

Step 5:  Remove ribbon cable from back of screen

Now that you have access to the back of the screen, gently unstick the yellow tape holding the ribbon in place and then slide the ribbon connector down to release it from the screen.

Remove ribbon cable

Step 6: Out with the old, in with the new!

You can now safely remove the old screen and replace it with the new one.

Step 7:  Reattach the cable & attach to bracket

Simply do the reverse of the last two steps!

Step 8:  Screw in the plastic lid & boot up laptop

Screw the lid back onto the casing of the screen and cover the screw heads with the screw covers you removed in the first step.  Voila!  You’ve just replaced your screen!  Boot up and make sure that all is well.

It works!

Using MDT 2012 RC1 to deploy a VHD to a host machine

Over the past few weeks I have been playing around with MDT 2012 RC1, specifically the new Deploy to VHD feature.  Working in an educational institution, we have a need for VHDs so that our students can practice using software that can’t be found on the regular desktop.  The ability to roll out a VHD to a host machine is very appealing and a welcome addition to the MDT deployment toolkit.

I had a look through the documentation and saw that the new feature uses a new Task Sequence to deploy an OS to a VHD file and add it to the BCD Store so that you may boot into the VHD as you would any OS.

This wasn’t exactly what I was after.  I already have a fully working VHD with all the software preloaded; I just want to roll the VHD file out to the host machine without formatting the entire hard disk.  After some digging around on the Internet and a quick trip to the deployment forums over at Microsoft, I came to the conclusion that the Task Sequence was going to need some serious tweaking to get this to work.

So, what exactly am I trying to achieve here?

I need to deploy an OS and applications to a host machine, as well as copy across a fully working VHD and add it to the BCD Store.  Simple.

Walkthrough

Before delving right into the walkthrough, let’s look at some important background information…

We are going to be calling a script from the deployment share called ZTIVHDCreate.wsf, found in the Scripts directory in the root of the share.  This script accepts a number of parameters that we can use when calling it; these are shown in the screenshot below and listed underneath:

**VHDInputVariable [Input] – name of the variable containing the target for the VHD file.

**VHDOutputVariable [Input] – name of the variable to receive the new disk index (typically OSDDiskIndex for ZTIDiskPart.wsf).

VHDDisks [Output] – partition index.

VHDCreateFileName (optional) – name of the VHD file to create (“RANDOM” or “” to auto-generate a name).

VHDCreateDiffVHD (optional) – name of the VHD differencing disk (“RANDOM” to auto-generate a name).

VHDCreateSource (optional) – name of the VHD file used to pre-populate the new VHD.

VHDCreateSizeMax (optional) – maximum size (in MB) of the VHD file (default: 90% of the parent disk).

VHDCreateType (optional) – creation type of the VHD disk, either FIXED or EXPANDABLE (default: EXPANDABLE).

** Required.

This walkthrough assumes that you have a working knowledge of MDT.  It also assumes that you have a VHD already created and ready for deployment.

Step 1: Set up test environment

For a test environment, Johan Arwidmark (a prolific deployment guy) suggested using a virtual machine and taking a snapshot before deploying the VHD.  This allows you to restore the snapshot if the VHD deployment doesn’t go as planned.  I used VirtualBox to create a Windows 7 VM, on which I then installed the Guest Additions and sorted out the networking.  At this stage, take a snapshot.

Step 2: Store the VHD on the deployment share and map to the share

Seeing as you will be accessing the VHD during deployment, it seems logical to store it on the deployment share itself.  I chose to create a directory called VHDs and dump it in there.  Now that we have the VHD somewhere we can access it, we need to map a drive to the share from our test environment.  Do this from an elevated command prompt using the net use command, as the scripts we run later require admin privileges:

net use V: \\ServerName\ShareName

e.g.  net use V: \\DeploymentServer\DeploymentShare$

Now that we have the VHD stored in a central location and have a drive mapped to that location, we are ready to run the script.

Step 3: Call deployment script

As mentioned at the top of this walkthrough, we will be calling the ZTIVHDCreate.wsf script to deploy the VHD.  This script is worth a read through so that you understand what it is you are about to run.  When calling the script, we can use parameters to give additional details to the deployment.  Some of these parameters are optional, but some are requirements.  Those that are required are denoted by a double asterisk in the list above.  The basic, stripped-down command looks like this:

cscript.exe ZTIVHDCreate.wsf /VHDInputVariable:VHDTargetDisk /VHDOutputVariable:OSDDiskIndex /DeploymentType:NEWCOMPUTER

You’ll notice the additional parameter, DeploymentType.  This is a required parameter, but for some reason it has been omitted from the script documentation.  This is what the command looked like for me when I came to call it:

cscript.exe "\\DeploymentServer\DeploymentShare$\Scripts\ZTIVHDCreate.wsf" /DeploymentType:NEWCOMPUTER /VHDInputVariable:VHDTargetDisk /VHDOutputVariable:OSDDiskIndex /VHDCreateSizeMax:51200 /VHDCreateFileName:Win7 /VHDCreateSource:"\\DeploymentServer\DeploymentShare$\VHDs\VHDTest.vhd"

So with the addition of some extra parameters, we can give it a maximum file size (note that this only applies to expandable VHDs), give it a name, and point it at the source VHD to copy across.

It takes some time for the VHD to copy across depending on how large the file is.  Remember that if you run into any problems or if your VHD didn’t deploy as cleanly as you had expected, you can always restore the snapshot from Virtual Box Manager.

Step 4: Deploying a host OS and a VHD

So far we have deployed a working VHD to a host machine by manually calling the ZTIVHDCreate.wsf script.  Now that we have had some success with that, it’s time to run it from within a Task Sequence alongside the deployment of a host OS.

To do this we will use a standard client deployment TS, rather than either of the client/server VHD deployments.

This deployment is exactly the same as a regular OS deployment but with the addition of the VHD after the installation of the applications.

So, to get this working, we’re going to take the command line we created in the last step and add it as a step in the TS.  Firstly, we need to add a new command line item to the TS.

In the above screenshot you can see that I have added several command line items to the TS.  The second command line to be run is the command that we previously created.  The first command line is the net use command that we ran from our test environment to map to the deployment share.  Depending on how your host OS was deployed, you may not need this command, as the deployment process may have used an admin account to map to the deployment share.  However, it was necessary for me, as the deployment process is not done using my network admin account.

As you see from the screenshot below, the net use command is exactly the same as it was in step 2, but this time I have instructed the TS that I want the command run with a specific user account.  This is an account that is specifically used during deployment and is not my own, for security purposes.

Once you have added these two command line items to the TS, you should be able to deploy a fully functional VHD to a host at the same time as deploying the host OS!  Just one optional step remains… adding the VHD to the BCD Store.

Step 5:  Automating addition of VHD to BCD Store

I can take no credit for the script that automates this process.  I have shamelessly borrowed it from Dan Stolts.  You can find the full script at his blog.  Basically, all you need to do here is store the script somewhere on the deployment share, either in the scripts folder or in the same directory as the VHD itself.

Once stored on the share, we can add the third and final command line item to the TS.  The syntax of this command is as follows:

<path of script> <path of LOCALLY stored VHD> <name of VHD to appear in the boot menu>

e.g.

\\DeploymentServer\Deploymentshare$\VHDs\BCDEdit.bat C:\VHD\Win7.vhd Windows7
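For reference, the heavy lifting inside that batch file boils down to a handful of standard bcdedit calls along these lines, where {guid} stands in for the identifier returned by the /copy command (see Dan Stolts’ post for the full, parameterised script):

rem create a new boot entry and point it at the locally stored VHD
bcdedit /copy {current} /d "Windows7"
bcdedit /set {guid} device vhd=[C:]\VHD\Win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHD\Win7.vhd
bcdedit /set {guid} detecthal on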

And that’s it!  Having followed these instructions, you should now have a dual boot machine that boots into a VHD you already had kicking around.

This method may not be the most efficient and may be tweaked over the coming weeks as I continue to play around with it.  I welcome your comments if you think something has been glossed over or if you have a better way of achieving this.  It works for me, and there currently aren’t any other guides for this on the net!  Good luck in your deployment!
