J2i.Net

Nothing at all and Everything in general.

Building content threw InvalidOperationException D3DERR_NOTAVAILABLE

I was working on a Windows Phone XNA example earlier and decided to run it on my Xbox. After duplicating the project as an Xbox 360 project I kept running into the same error that seemed to have no explanation. 

 

Building content threw InvalidOperationException D3DERR_NOTAVAILABLE

 

It took a bit of time to figure out what was going on here. A search on the Internet turned up that certain Direct3D programs cannot run at the same time as an XNA project. I don't fully understand why, but as it turns out the program that was causing me to experience this problem was the Zune client. Once I closed the Zune client I was able to compile and run my Xbox 360 program. Weird. 

Calculating Distance from GPS Coordinates

I've been carrying this equation around forever and a day and thought I would share it. With this equation you can calculate the distance between GPS coordinates. I tend to use SI units, but you should be able to easily adjust it for units of your choosing.

using System; 
using System.Device.Location; 
 
namespace J2i.Net.GPS 
{ 
    public static class DistanceCalculator 
    { 
 
        public const double EarthRadiusInMiles = 3956.0; 
        public const double EarthRadiusInKilometers = 6367.0; 
        public const double EarthRadiusInMeters = EarthRadiusInKilometers*1000; 
 
        public static double ToRadian(double val) { return val * (Math.PI / 180); } 
        public static double ToDegree(double val) { return val * 180 / Math.PI; } 
        public static double DiffRadian(double val1, double val2) { return ToRadian(val2) - ToRadian(val1); } 
 
 
 
        public static double CalcDistance(GeoCoordinate p1, GeoCoordinate p2) 
        { 
            return CalcDistance(p1.Latitude, p1.Longitude, p2.Latitude, p2.Longitude, EarthRadiusInKilometers); 
        } 
 
        public static Double Bearing(GeoCoordinate p1, GeoCoordinate p2) 
        { 
            return Bearing(p1.Latitude, p1.Longitude, p2.Latitude, p2.Longitude); 
        } 
 
        public static double CalcDistance(double lat1, double lng1, double lat2, double lng2, double radius) 
        { 
 
            return radius * 2 * Math.Asin(Math.Min(1, Math.Sqrt((Math.Pow(Math.Sin((DiffRadian(lat1, lat2)) / 2.0), 2.0) 
                + Math.Cos(ToRadian(lat1)) * Math.Cos(ToRadian(lat2)) * Math.Pow(Math.Sin((DiffRadian(lng1, lng2)) / 2.0), 2.0))))); 
        } 
 
        public static Double Bearing(double lat1, double lng1, double lat2, double lng2) 
        { 
            // Work in radians; the trig functions expect radians, not degrees.
            lat1 = ToRadian(lat1); 
            lat2 = ToRadian(lat2); 
            var dLat = lat2 - lat1; 
            var dLon = ToRadian(lng2 - lng1); 
            var dPhi = Math.Log(Math.Tan(lat2 / 2 + Math.PI / 4) / Math.Tan(lat1 / 2 + Math.PI / 4)); 
            var q = (Math.Abs(dLat) > 0) ? dLat / dPhi : Math.Cos(lat1); 

            // Take the shorter way around the globe.
            if (Math.Abs(dLon) > Math.PI) 
            { 
                dLon = dLon > 0 ? -(2 * Math.PI - dLon) : (2 * Math.PI + dLon); 
            } 
            //var d = Math.Sqrt(dLat * dLat + q * q * dLon * dLon) * R; 
            var brng = ToDegree(Math.Atan2(dLon, dPhi)); 
            return brng; 
        } 
 
    } 
} 
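Assuming the class above is in scope, here's a quick sketch of how it might be called. The coordinates are my own illustrative values, not part of the original post:

```csharp
// A hypothetical usage of the DistanceCalculator above; the coordinates
// are approximate values for Atlanta, GA and Boston, MA that I picked.
var atlanta = new GeoCoordinate(33.749, -84.388);
var boston = new GeoCoordinate(42.3601, -71.0589);

// Distance in kilometers (the GeoCoordinate overload uses EarthRadiusInKilometers).
double km = DistanceCalculator.CalcDistance(atlanta, boston);

// The same calculation in miles by passing a different radius.
double miles = DistanceCalculator.CalcDistance(
    atlanta.Latitude, atlanta.Longitude,
    boston.Latitude, boston.Longitude,
    DistanceCalculator.EarthRadiusInMiles);

// Atlanta to Boston is on the order of 1,500 km (roughly 940 miles).
```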

Is My Media Locked?

If you've used the MediaElement on Windows Phone (or one of the other media-related components) then you probably know that it won't work while the phone is connected to Zune. But Zune is needed for debugging. So how do you debug if part of your software package renders your phone half-functional while you are debugging? Well, you don't actually need to have Zune running to debug. There is a command line utility in the October update to the Windows Phone Developer Tools called WPConnect.exe. Upon connecting your phone to your computer Zune will open. Close it and run WPConnect.exe and you'll be able to deploy, run, and debug without your media library being crippled. 

But after distribution of your program it's still possible for a user to have their media functionality locked if they run the program you wrote while the phone is connected to Zune. You'll probably want to notify the user of what must be done to unlock the full functionality of your program. Eric Fleck of Microsoft had a suggestion that seems to work pretty well (original source here). In short, he checks to see whether the phone reports that it is connected to an Ethernet adapter. If it does, then chances are it is connected to a computer with Zune. There are scenarios in which the phone could report that it is connected to an Ethernet adapter while the media library is not locked (ex: when connected using WPConnect.exe). The code is pretty simple:

 

        // NetworkInterface here is Microsoft.Phone.Net.NetworkInformation.NetworkInterface.
        void CheckNetworkStatus()
        {
            if (NetworkInterface.GetIsNetworkAvailable())
            {
                if (NetworkInterface.NetworkInterfaceType == 
                    NetworkInterfaceType.Ethernet)
                {
                    MediaState = "Possibly locked, disconnect Zune";
                    return;
                }
            }
            MediaState = "All's Well! Media is available!";
        }

If you want the code in project form, you can find it here.

Writing a Proper Wave File

Currently one of the recurring questions I see in the Windows Phone 7 forums deals with playing back data that was recorded from the microphone. Often developers will write the sound bytes that they receive from a microphone to a file and then try to export the file for playback, or play it back using the media classes on the phone, only to find that the file can't be processed. During my lunch break today I had a chance to throw something together that I think will point those developers in the right direction.

Why Won't the File Play?

The file won't play because none of the components or software to which it has been given know anything about the file. If you record from the microphone and dump the raw bytes to a file, the things you are not writing include the sample rate, the number of bits per sample, the file format, and so on. You need to prepend the file with all of these things for it to be usable by the media classes. A quick Bing search turned up a description of the needed header at https://ccrma.stanford.edu/courses/422/projects/WaveFormat/. Using that I put together a quick desktop application that produces a playable wave file. I targeted the desktop because the computer I'm using doesn't have the phone developer tools, but the code will be pretty much the same on the phone. The only difference will be in the creation of your file: while I am creating a file stream directly, you would create a stream in isolated storage.

Simulating Audio Data

I need some data to write to my file. As is my preference, I've created a function that will populate an array of bytes with the output of the sine function. As its parameters it takes the sample rate, the length of time that I want the sound to play, the wave's frequency, and its magnitude (with 0 being the lowest magnitude and 1 being the greatest), and it returns the data in a byte array. You would populate your array with the bytes from the recording instead. The code I used to do this follows.

public static byte[] CreateSinWave( 
        int sampleRate, 
        double frequency, 
        TimeSpan length, 
        double magnitude
    )
{
    int sampleCount = (int)(((double)sampleRate) * length.TotalSeconds);
    short[] tempBuffer = new short[sampleCount];
    byte[] retVal = new byte[sampleCount*2];
    // Phase increment per sample: one full cycle (2*pi) every sampleRate/frequency samples.
    double step = frequency * Math.PI * 2.0d / sampleRate;
    double current = 0;
            
    for(int i=0;i<tempBuffer.Length;++i)
    {
        tempBuffer[i] = (short)(Math.Sin(current) * magnitude * ((double)short.MaxValue));
        current += step;
    }

    Buffer.BlockCopy(tempBuffer,0,retVal,0,retVal.Length);
    return retVal;
}

Populating the Wave Header

There are better ways to do this, much better ways. But I'm just trying to create something satisficing in a short period of time.

Trivial Fact: Satisficing is a term coined by Herbert Simon to mean sufficiently satisfying. A satisficing solution may not be the best solution, but it gets the job done!

Looking at the chart that describes a wave header, I wrote either literal bytes or calculated values, where the calculated values are based on the sample rate, number of channels, and a few other factors. There's not a lot to say about it; the code follows.

static byte[] RIFF_HEADER = new byte[] { 0x52, 0x49, 0x46, 0x46 };
static byte[] FORMAT_WAVE = new byte[] { 0x57, 0x41, 0x56, 0x45 };
static byte[] FORMAT_TAG  = new byte[] { 0x66, 0x6d, 0x74, 0x20 };
static byte[] AUDIO_FORMAT = new byte[] {0x01, 0x00};
static byte[] SUBCHUNK_ID  = new byte[] { 0x64, 0x61, 0x74, 0x61 };
private const int BYTES_PER_SAMPLE = 2;

public static void WriteHeader(
     System.IO.Stream targetStream, 
     int byteStreamSize, 
     int channelCount, 
     int sampleRate)
{

    int byteRate = sampleRate*channelCount*BYTES_PER_SAMPLE;
    int blockAlign = channelCount*BYTES_PER_SAMPLE;

    targetStream.Write(RIFF_HEADER,0,RIFF_HEADER.Length);
    targetStream.Write(PackageInt(byteStreamSize+44-8, 4), 0, 4);

    targetStream.Write(FORMAT_WAVE, 0, FORMAT_WAVE.Length);
    targetStream.Write(FORMAT_TAG, 0, FORMAT_TAG.Length);
    targetStream.Write(PackageInt(16,4), 0, 4);//Subchunk1Size    

    targetStream.Write(AUDIO_FORMAT, 0, AUDIO_FORMAT.Length);//AudioFormat   
    targetStream.Write(PackageInt(channelCount, 2), 0, 2);
    targetStream.Write(PackageInt(sampleRate, 4), 0, 4);
    targetStream.Write(PackageInt(byteRate, 4), 0, 4);
    targetStream.Write(PackageInt(blockAlign, 2), 0, 2);
    targetStream.Write(PackageInt(BYTES_PER_SAMPLE*8, 2), 0, 2);//BitsPerSample
    //targetStream.Write(PackageInt(0,2), 0, 2);//Extra param size
    targetStream.Write(SUBCHUNK_ID, 0, SUBCHUNK_ID.Length);
    targetStream.Write(PackageInt(byteStreamSize, 4), 0, 4);
}

static byte[] PackageInt(int source, int length=2)
{
    if((length!=2)&&(length!=4))
        throw new ArgumentException("length must be either 2 or 4", "length");
    var retVal = new byte[length];
    retVal[0] = (byte)(source & 0xFF);
    retVal[1] = (byte)((source >> 8) & 0xFF);
    if (length == 4)
    {
        retVal[2] = (byte) ((source >> 0x10) & 0xFF);
        retVal[3] = (byte) ((source >> 0x18) & 0xFF);
    }
    return retVal;
}
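To sanity check the little endian packing, here's a small self-contained demo built around the same PackageInt logic (the demo harness is mine, not part of the original listing):

```csharp
using System;

class PackageIntDemo
{
    static byte[] PackageInt(int source, int length = 2)
    {
        var retVal = new byte[length];
        retVal[0] = (byte)(source & 0xFF);          // low order byte goes first
        retVal[1] = (byte)((source >> 8) & 0xFF);
        if (length == 4)
        {
            retVal[2] = (byte)((source >> 0x10) & 0xFF);
            retVal[3] = (byte)((source >> 0x18) & 0xFF);
        }
        return retVal;
    }

    static void Main()
    {
        // 44100 decimal is 0xAC44, so little endian order is 44-AC-00-00.
        Console.WriteLine(BitConverter.ToString(PackageInt(44100, 4)));
    }
}
```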

That's pretty much all you need to know. To use the code I wrote a simple console mode program.

static void Main(string[] args)
{
    // Use the same sample rate for generation and for the header;
    // a mismatch would detune the playback.
    const int sampleRate = 44100;
    var soundData = WaveHeaderWriter.CreateSinWave(sampleRate, 120, TimeSpan.FromSeconds(60), 1d);
    using(FileStream fs = new FileStream("MySound2.wav", FileMode.Create))
    {
        WaveHeaderWriter.WriteHeader(fs, soundData.Length, 1, sampleRate);
        fs.Write(soundData,0,soundData.Length);
        fs.Close();
    }
}

I opened the resulting output in Audacity and the results are what I expected.

And of course as a final test I double clicked on the file. It opened in Windows Media Player and played the Sine wave.

So there you have it, the program works! When I get a chance I will try to make a version of this in Windows Phone 7. Those of you that have WPDT without the full version of Visual Studio will not be able to compile this program directly. But the binary is included in the source code if you want to run it.

At the Next Atlanta Silverlight Meeting: WP7

I'll be speaking at the next Atlanta Silverlight Developer's Meeting. If you're in the Atlanta area stop by and say "Hi!". Here's the info.

When: Wednesday, October 27, 2010 6:30 PM
Where: Five Seasons Brewing

Windows Phone: How Did We Get Here and Where are We Headed?

On Wednesday, October 27th, Joel Johnson will be presenting on the past, present and future of Silverlight development on Windows Phone. We will meet at 6:30 pm at 5 Seasons Brewing at the Prado.

Bio

Joel Johnson is a Device Application Development MVP and is currently transitioning into the Windows Phone Development MVP program. He has extensive experience with Windows Mobile, Silverlight and XNA. He has also been the caretaker of one of the rare early Windows Phone devices for several months.

Abstract

With Microsoft's official WP7 launch with AT&T Monday, we should soon be seeing signs of the much anticipated Windows Phone marketing blitz. The Windows Phone marketplace is now open for early submissions and AT&T has announced three new phones which will become available in the US in early November.

Now that we are at the end of the year-long rush by Microsoft to get a phone out before Christmas, Joel will help us to take a moment to see how we got to this point. The Microsoft phone strategy was once guided by a desire for a feature rich device targeted at the enterprise. It is now guided by a desire for a user-experience rich device targeted at consumers. Moreover, the old developer platform has not only been overhauled but completely replaced with a Silverlight + XNA development platform. Joel will show how these two technologies work together on the phone, demonstrating native XNA features as well as how we as Silverlight developers can tap into the XNA APIs to develop rich Silverlight applications for the phone.

RSVP to this Meetup:
http://www.meetup.com/The-Atlanta-Silverlight-Meetup-Group/calendar/15095608/

Windows Phone 7 Launch Events

There's plenty of buzz in the air about Windows Phone 7. If you are interested in WP7 then you'll be interested in the following events. 

On Monday, 11 October at 9:30 AM EDT you can watch the Windows Phone 7 launch event live! Here's the URL for the streaming: http://www.microsoft.com/presspass/presskits/windowsphone/. There's no telling what type of new information we'll hear at the announcement. 

 

The others are the Windows Phone 7 launch events. I have the information for the events in the USA. If you are in one of the nations in which it will be launched this year you may want to check to see if there are events in your area. These are free, two-day events. There will be real Windows Phone 7 devices at the events and plenty of new information on what's coming.

 

Day #   Date     City            State   Venue & Registration Link
Day 1   12-Oct   Boston          MA      Royal Sonesta Hotel Boston
Day 2   13-Oct   Boston          MA      Royal Sonesta Hotel Boston
Day 1   12-Oct   Detroit         MI      Westin Book Cadillac Hotel
Day 2   13-Oct   Detroit         MI      Westin Book Cadillac Hotel
Day 1   12-Oct   Mountain View   CA      Microsoft Silicon Valley Office
Day 2   13-Oct   Mountain View   CA      Microsoft Silicon Valley Office
Day 1   19-Oct   Chicago         IL      Swissôtel Chicago
Day 2   20-Oct   Chicago         IL      Swissôtel Chicago
Day 1   19-Oct   New York        NY      Marriott Marquis
Day 2   20-Oct   New York        NY      Marriott Marquis
Day 1   20-Oct   Dallas          TX      InterContinental Hotel
Day 2   21-Oct   Dallas          TX      InterContinental Hotel
Day 1   20-Oct   San Francisco   CA      San Fran Design Center
Day 2   21-Oct   San Francisco   CA      San Fran Design Center
Day 2   22-Oct   Atlanta         GA      Georgia World Congress Center

Using DynamicSoundEffectInstance

Download the Code (93.1 KB)

After an Atlanta Silverlight Users meeting I was eating with a couple of other MVPs and we were talking about the things we were doing and would like to do with Windows Phone 7. I had mentioned I would like to have direct access to the sound buffer used in XNA. James Ashley immediately responded with "DynamicSoundEffectInstance!" At the time James had never used it, and I had just discovered it, so I needed to get some more information on how it works. That night I stayed up a little later than usual to figure it out. With the documentation for the class still being in early form I didn't quite find everything that I wanted to know, but I was able to work it out.

In writing this I'm going to assume that you know a bit about the mechanics of how sound and speakers work. If not, you'll want to read the Wikipedia article on digital-to-analog converters.

In this article I simply want to get to the point of being able to play a tone and control its frequency. From a high level view this is what we will need to do:

 

  1. Create a few byte buffers that will hold the next part of the sound to be played
  2. Populate one of the byte buffers with the wave form to be played
  3. Give the buffer to a DynamicSoundEffectInstance
  4. Tell the SoundEffectInstance to start playing
  5. In response to the BufferNeeded event populate the next buffer and submit it
  6. Goto step 5
Now let's convert those steps into something more concrete, starting with allocating the buffers. 
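Before drilling into each step, here is how the whole cycle might hang together in code. This is a rough sketch under my own assumptions; RenderNextBuffer and FillWithWaveForm are hypothetical helpers, not XNA APIs:

```csharp
// Rough sketch of the six steps above; assumes an XNA or Silverlight project
// with the Microsoft.Xna.Framework.Audio namespace available.
var dse = new DynamicSoundEffectInstance(22000, AudioChannels.Mono);

// Step 1: a few byte buffers to cycle through, sized for ~1/30s of sound each.
int bufferSize = dse.GetSampleSizeInBytes(TimeSpan.FromSeconds(1d / 30d));
byte[][] buffers = { new byte[bufferSize], new byte[bufferSize], new byte[bufferSize] };

// Steps 5 and 6: each time more data is needed, populate and submit the next buffer.
dse.BufferNeeded += (sender, e) =>
{
    byte[] next = RenderNextBuffer();   // hypothetical helper that fills a buffer
    dse.SubmitBuffer(next);
};

FillWithWaveForm(buffers[0]);           // step 2: populate the first buffer
dse.SubmitBuffer(buffers[0]);           // step 3: hand it to the DynamicSoundEffectInstance
dse.Play();                             // step 4: start playback
```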

Creating the Buffer

The size of the buffer you choose is largely going to be driven by the latency you want your sounds to have and the desired quality of the sound you are generating. In general low latency is good; with low latency there is less of a time difference between when your program generates a sound and when the user hears it. If you made a program to simulate a piano you would want low latency so that the user perceives that the device is playing sound as soon as they press a key on the screen. Naturally you will also want high quality. But there are trade-offs as you aim for higher quality and lower latency, just as there are trade-offs in aiming for low quality and high latency. 

To produce higher quality sounds you will need a higher sample rate. If you raise the sample rate used to play back a sound then you will either need to increase the size of your buffer (so more memory is being consumed) or you will need to populate and supply smaller buffers more frequently (so more CPU time is being consumed). While lower quality uses less memory and less CPU time, the negative part is evident; your program won't sound as good. If you aim for lower latency you will need to use smaller buffers, but that also means the DynamicSoundEffectInstance is requesting new buffers more often (once again, more CPU time). My suggestion for sound quality is to aim for something that is good enough. Don't start off at the 48KHz sample rate. Start instead at around 22KHz or lower and see how well that works for you. As for latency, with an XNA program aim for a latency determined by the FPS of your game. If your game is made to run at 30 frames per second then make buffers that are big enough to play 1/30th of a second of sound. A sound can also be in stereo or mono. It goes without saying that twice the memory is needed to generate a sound in stereo than in mono.

Let's for now assume that we are creating a DynamicSoundEffectInstance with a sample rate of 22KHz in mono. We could instantiate one with the following:


 

var dynamicSoundEffectInstance = new DynamicSoundEffectInstance(22000,AudioChannels.Mono);

We can calculate the size of the buffers in one of two ways. The DynamicSoundEffectInstance always plays 16-bit sound samples (2 bytes). If I wanted to be able to play 1/30th of a second of sound at a 22KHz sample rate, the number of bytes needed for the buffer would be 22000*(1/30)*2*1 = 1466. The last two numbers in the equation (2*1) are the number of bytes in a sample multiplied by the number of channels to be played. Were I playing a stereo sample the second number would have been 2 instead of 1. I could instead have asked the DynamicSoundEffectInstance to calculate the size of the needed buffer:

dynamicSoundEffectInstance.GetSampleSizeInBytes(TimeSpan.FromSeconds(1d/30d))
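Both approaches should agree. Here is the arithmetic from the paragraph above as a small self-contained check (the constant names are my own):

```csharp
using System;

class BufferSizeDemo
{
    static void Main()
    {
        const int sampleRate = 22000;
        const int bytesPerSample = 2;   // DynamicSoundEffectInstance plays 16-bit samples
        const int channelCount = 1;     // mono
        double seconds = 1d / 30d;      // one frame of a 30 FPS game

        // 22000 * (1/30) * 2 * 1
        int bufferSize = (int)(sampleRate * seconds) * bytesPerSample * channelCount;
        Console.WriteLine(bufferSize);  // 1466
    }
}
```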

Populate the Buffer

The data that you put into the buffer is derived from the sound that you are playing. If you've been reading astutely you may have noticed that I've stated that DynamicSoundEffectInstance consumes an array of bytes (8-bit) but the audio must be composed of 16-bit samples. In C++ one might just cast the array and pass it to whatever held the data; the language would let you do that, even if doing that made no sense. In the C# language one can also do that by wrapping the code in an unsafe block. But many feel that code wrapped in unsafe blocks is potentially not safe (I wonder why), and Silverlight won't let you do such things. So it's necessary to convert your 16-bit data to byte data by other means. There's a method available for doing so, but I'll also describe how to do it manually.

A 16-bit (two byte) number has a high order byte and a low order byte. High and low order could also be read as more significant and less significant. In the decimal number 39 the three is in a more significant position than the nine; it has more of an impact on the final value. The same concept transfers to numbers composed of bytes. Our bytes need to be in little endian order: the low order byte must be placed in the array before the high order byte. The low order byte can be singled out with a bit mask, the high order byte with bit shifting.

byte lowOrder = (byte)(SomeNumber & 0xFF);
byte highOrder = (byte)(SomeNumber >> 0x08); 

Now that you know what needs to be done, here's the utility method that will essentially do the same thing.

Buffer.BlockCopy(
                   sourceBuffer
                 , sourceStartIndex
                 , destinationBuffer
                 , destinationStartIndex
                 , ByteCount)

The sourceBuffer element in this case would be the array of 16-bit integers. The destinationBuffer would be the destination byte buffer. Two things to note. First, the destination buffer must have twice the number of elements as the source buffer (since bytes are half the size of short integers). Second, the last argument is the number of bytes to be copied and not the number of elements. If you get this wrong you'll either get an IndexOutOfRange exception or something that sounds pretty bad.
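A minimal, self-contained demonstration of that copy (the sample values are mine):

```csharp
using System;

class BlockCopyDemo
{
    static void Main()
    {
        short[] samples = { 0x1234, 0x5678 };

        // The destination must hold two bytes for every 16-bit sample.
        byte[] bytes = new byte[samples.Length * sizeof(short)];

        // The last argument is a count of bytes, not elements.
        Buffer.BlockCopy(samples, 0, bytes, 0, bytes.Length);

        // On little endian hardware this prints 34-12-78-56.
        Console.WriteLine(BitConverter.ToString(bytes));
    }
}
```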

Start Playing the Sound

Once the DynamicSoundEffectInstance has a buffer I call Play to get things rolling.

Submitting the Buffers to the DynamicSoundEffectInstance

The DynamicSoundEffectInstance has an event called BufferNeeded that will be raised when the object is ready for more sound data to be played. If you are making an XNA program you may want to avoid the object getting to the point where it needs to raise this event. You can reduce overhead by feeding the class data at the same rate at which it is consuming it. This can be easily done by making the buffers big enough to play as much sound as can be played in one cycle of your game loop. If you are making a Silverlight application you'll be making use of this event. From what I've found, the DynamicSoundEffectInstance class will hold up to two buffers: it plays from one and holds the other in place to be played next. So I prefer to make three buffers so that I have a third buffer into which I can render the next block of sound. When the BufferNeeded event is raised I populate a buffer and pass it through the SubmitBuffer method. I use the same buffers in round robin fashion.
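A sketch of how that round robin might look in a handler, under my own assumptions about field names (they are mine, not from the attached project):

```csharp
// Three buffers allocated up front; one playing, one queued, one being filled.
byte[][] _audioBufferList;
int _currentBufferIndex;

void OnBufferNeeded(object sender, EventArgs e)
{
    // Take the next buffer in round robin order...
    byte[] next = _audioBufferList[_currentBufferIndex];
    _currentBufferIndex = (_currentBufferIndex + 1) % _audioBufferList.Length;

    // ...render the next block of sound into it, then submit it.
    FillBuffer(next);   // hypothetical fill routine
    _dynamicSoundEffectInstance.SubmitBuffer(next);
}
```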

FrameworkDispatcher.Update()

This is only needed if you are using the class from within Silverlight. FrameworkDispatcher.Update will need to be called at least once before playing your sound and must continue to be called periodically. The Windows Phone documentation already walks one through a class that will do this. Take a look at this article to see how this class works.

My Sound Function and Controlling the Frequency

While the sound data passed to DynamicSoundEffectInstance must be signed 16-bit integers, I wanted to keep my sound generating functions decoupled from this constraint and also decoupled from the specific frequency being played. I achieved these goals in a class I've created named SoundManager. While SoundManager contains the code to generate a sine wave, the actual sound function used is assigned to the property SoundFunction. One only needs to assign a different function to this property to generate a different sound.

To decouple the function from the data format, I've created my program so that it expects the sound function to return its data as a double. The values returned by the sound function should be in the range [-1..1]. I'm not doing range checking, to avoid the overhead (so if you use my code it's up to you to make sure your code behaves). The function consumes two parameters: a double value to represent time and an integer value to represent the channel. Channel would presumably be 0 for the left channel and 1 for the right channel; for generating mono sound this parameter can be ignored. The time parameter indicates which part of the cycle of a sound wave is being requested. The values returned by the sound function from 0 to 1 are for one cycle of the sound; from 1 to 2 are for the second cycle, and so on.

Since the time parameter represents the position within a cycle instead of actual time, the sound function is insulated from the actual frequency being generated. I can change the frequency of the sound being played by increasing or decreasing the intervals between the time values passed. Shorter intervals will lead to lower frequencies; larger intervals will lead to higher frequencies. Note that the highest frequency that you can create is going to be no higher than half the sample rate. So with a 22 KHz sample rate you would only be able to generate sounds with frequency components as high as 11 KHz. Given that most sounds we hear are a complex mixture of sound components, keep in mind that there may be some frequency components higher than what may be recognized as the dominant frequency. Playing such sounds at a high frequency could result in some of the higher frequency components being stripped out. You can find more information on this concept under the topic Nyquist rate.
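Under those conventions the per-sample increment of the time parameter works out to frequency divided by sample rate. A small self-contained check of that arithmetic (the variable names are mine):

```csharp
using System;

class TimeStepDemo
{
    static void Main()
    {
        const int sampleRate = 22000;
        const double frequency = 440.0; // A above middle C

        // Each sample advances the time parameter by this many cycles.
        double deltaTime = frequency / sampleRate;

        // One second's worth of samples should advance the time by ~440 cycles.
        double cyclesPerSecond = deltaTime * sampleRate;
        Console.WriteLine(cyclesPerSecond);
    }
}
```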

The method FillBuffer will call this function for each sample that it needs to fill the next buffer.

double MySin(double time, int channel) { return Math.Sin(time*Math.PI*2.0d); }

The code for filling the sound buffer is as follows

        void FillBuffer()
        {
            if (SoundFunction == null)
                throw new NullReferenceException("SoundFunction");
            byte[] destinationBuffer = _audioBufferList[CurrentFillBufferIndex];
            if (++CurrentFillBufferIndex >= _audioBufferList.Length)
                CurrentFillBufferIndex = 0;
            short result;

            for (int i = 0; i < destinationBuffer.Length / (ChannelCount * BytesPerSample); ++i)
            {
                int baseIndex = ChannelCount * BytesPerSample * i;
                for (int c = 0; c < ChannelCount; ++c)
                {
                    result = (short)(MaxWaveMagnitude * SoundFunction(_Time, c));

                    #if(MANUAL_COPY)
                    // Write the low order byte, then the high order byte (little endian).
                    destinationBuffer[baseIndex + c * BytesPerSample] = (byte)(0xFF & result);
                    destinationBuffer[baseIndex + c * BytesPerSample + 1] = (byte)(0xFF & (result >> 0x8));
                    #else
                    _renderingBuffer[i * ChannelCount + c] = result;
                    #endif                    
                }
                _Time += _deltaTime;
            }
            #if(!MANUAL_COPY)
            Buffer.BlockCopy(_renderingBuffer, 0, destinationBuffer, 0, _renderingBuffer.Length*sizeof(short));
            #endif
            OnPropertyChanged("Time");
            OnPropertyChanged("PendingBufferCount");
        }

If you deploy the code attached to this entry you'll have a program that can play a sine wave. Pretty boring, I know. But I wanted to keep the sound that I was playing in this first mention of DynamicSoundEffectInstance simple. The next time I mention it I want to talk about generating more complex sounds and will probably say little about using the class itself outside of referencing this entry. 

IE9 Rocks!

And now for some off topic comments!

I've been using IE9 Beta since it was released to public beta and I have to say it rocks! I've installed it on most of my machines and have to say that it looks clean and runs fast. The interface has as many buttons as needed and no more. And the things you can do with HTML 5 are awesome!

What Happened to the WP7 Icon Pack?

In case you were looking for the Windows Phone 7 Icon Pack and noticed all the links to it on the Microsoft Download site are dead, don't worry, getting the icons is easier than you might think. The Icon Pack is now part of Expression Blend for Windows Phone. When you are working with the application bar and add items, you can change the icon used with a drop down in Expression Blend for Windows Phone. When you select an icon it is automatically added to your project. If you want to get to the icons to work with them yourself you can find them on your drive in C:\Program Files\Microsoft SDKs\Windows phone\v7.0\Icons.