J2i.Net

Nothing at all and Everything in general.

Adjusting Microsoft Translator WAVE Volume

Video Entry
Download Code (32 Kb)

 

The code in this article was inspired by some questions on Windows Phone 7, but it's generic enough to be used on other .Net based platforms. In the Windows Phone AppHub forums there was a question about altering the volume of the WAVE file that the Microsoft translator service returns. In the StackOverflow forums there was a question about mixing two WAVE files together. I started off working on a solution for the volume question and when I stepped back to examine it I realized I wasn't far away from a solution for the other question. So I have both solutions implemented in the same code. In this first post I'm showing what I needed to do to alter the volume of the WAVE stream that comes from the Microsoft Translation service.

I've kept the code generic enough that you can apply other algorithms to it if you want. I've got some ideas on how the memory buffer for the sound data could be better handled; that would allow large recordings to be manipulated without keeping the entire recording in memory, and would make it easier to alter the length of a recording. But the code as presented demonstrates three things:

  1. Loading a WAVE file from a stream
  2. Altering the WAVE file contents in memory
  3. Saving WAVE files back to a stream

The code for saving a WAVE file is a modified version of the code that I demonstrated some time ago for writing a proper WAVE file for the content that comes from the Microphone buffer.

Prerequisites

I'm making the assumption that you know what a WAVE file and a sample are. I am also assuming that you know how to use the Microsoft Translator web service.

Loading a Wave File

The format for WAVE files is pretty well documented. There's more than one encoding that can be used in a WAVE file, but I'm concentrating on PCM encoded WAVE files and will for now ignore all of the other possible encodings. The document that I used can be found here. There are a few variants from the document that I found when dealing with real WAVE files and I'll comment on those variants in a moment. In general most of what you'll find in the header are 8, 16, and 32-bit integers and strings. I read the entire header into a byte array and extract the information from that byte array into an appropriate type. To extract a string from the byte array you need to know the starting index of the string and the number of characters it contains. You can then use Encoding.UTF8.GetString to extract the string. If you understand how the numbers are encoded (little endian) decoding them is fairly easy. If you want a better understanding see the Wikipedia article on the encoding.

Integer Size   Extraction Code
8-bit          data[i]
16-bit         (data[i])|(data[i+1]<<0x08)
32-bit         (data[i])|(data[i+1]<<0x08)|(data[i+2]<<0x10)|(data[i+3]<<0x18)
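
As a quick example (the byte values here are invented), this is the 32-bit expression from the table decoding four bytes that appear in little endian order:

byte[] data = { 0x78, 0x56, 0x34, 0x12 };
int value = (data[0]) | (data[1] << 0x08) | (data[2] << 0x10) | (data[3] << 0x18);
//value is 0x12345678; the least significant byte came first in the stream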

Offset Title           Size           Type                 Description
0      ChunkID         4              string(4)            literal string "RIFF"
4      ChunkSize       4              int32                Size of the entire file minus eight bytes
8      Format          4              string(4)            literal string "WAVE"
12     SubChunk1ID     4              string(4)            literal string "fmt "
16     SubChunk1Size   4              int32                size of the rest of the subchunk
20     AudioFormat     2              int16                Should be 1 for PCM encoding.
22     ChannelCount    2              int16                1 for mono, 2 for stereo,...
24     SampleRate      4              int32
28     ByteRate        4              int32                (SampleRate)*(ChannelCount)*(BitsPerSample)/8
32     BlockAlign      2              int16                (ChannelCount)*(BitsPerSample)/8
34     BitsPerSample   2              int16
       ExtraParamSize  2              int16                possibly not there
       ExtraParams     ?              ?                    possibly not there
36+x   SubChunk2ID     4              string(4)            literal string "data"
40+x   SubChunk2Size   4              int32
44+x   data            SubChunk2Size  byte[SubChunk2Size]

The header will always be at least 44 bytes long. So I start off reading the first 44 bytes of the stream. The SubChunk1Size will normally contain the value 16. If it's greater than 16 then the header is greater than 44 bytes and I read the rest. I've allowed for a header size of up to 64 bytes (which is much larger than I have encountered). A header size of larger than 44 bytes will generally mean that there is an extra parameter at the end of SubChunk1. For what I'm doing the contents of the extra parameters don't matter. But I still need to account for the space that they consume to properly read the header.

To my surprise, the contents of the fields in the header are not always populated. Some audio editors leave some of the fields zeroed out. My first attempt to read a WAVE file was with a file that came from the open source audio editor Audacity. Among other fields, the BitsPerSample field was zeroed. I'm not sure whether the format allows this; it certainly isn't mentioned in any of the spec sheets that I've found. When I encounter this I assume a value of 16.

Regardless of whether a WAVE file contains 8-bit, 16-bit, or 32-bit samples, when it is read in I store the samples in an array of doubles. I chose to do this because double works out better for some of the math operations I have in mind.

public void ReadWaveData(Stream sourceStream, bool normalizeAmplitude = false)
{
    //In general I should only need 44 bytes. I'm allocating extra memory (up to
    //64 bytes) because of a variance I've seen in some WAV files.
    byte[] header = new byte[64];
    int bytesRead = sourceStream.Read(header, 0, 44);
    if(bytesRead!=44)
        throw new InvalidDataException(String.Format("This can't be a wave file. It is only {0} bytes long!",bytesRead));

    int audioFormat = (header[20]) | (header[21] << 8);
    if (audioFormat != 1)
        throw new Exception("Only PCM Waves are supported (AudioFormat=1)");

    #region mostly useless code
    string chunkID = Encoding.UTF8.GetString(header, 0, 4);
    if (!chunkID.Equals("RIFF"))
    {
        throw new InvalidDataException(String.Format("Expected a ChunkID of 'RIFF'. Received a chunk ID of {0} instead.", chunkID));
    }
    int chunkSize = (header[4]) | (header[5] << 8) | (header[6] << 16) | (header[7] << 24);
    string format = Encoding.UTF8.GetString(header, 8, 4);
    if (!format.Equals("WAVE"))
    {
        throw new InvalidDataException(String.Format("Expected a format of 'WAVE'. Received a chunk ID of {0} instead.", format));
    }
    string subChunkID = Encoding.UTF8.GetString(header, 12, 4);
    if (!format.Equals("fmt "))
    {
        throw new InvalidDataException(String.Format("Expected a subchunkID of 'fmt '. Received a chunk ID of {0} instead.", subChunkID));
    }
    int subChunkSize = (header[16]) | (header[17] << 8) | (header[18] << 16) | (header[19] << 24);
    #endregion

    if (subChunkSize > 16)
    {
        var bytesNeeded = subChunkSize - 16;
        if(bytesNeeded+44 > header.Length)
            throw new InvalidDataException("The WAV header is larger than expected. ");
        sourceStream.Read(header, 44, subChunkSize - 16);
    }

    ChannelCount = (header[22]) | (header[23] << 8);
    SampleRate = (header[24]) | (header[25] << 8) | (header[26] << 16) | (header[27] << 24);
    #region Useless Code
    int byteRate = (header[28]) | (header[29] << 8) | (header[30] << 16) | (header[31] << 24);
    int blockAlign = (header[32]) | (header[33] << 8);
    #endregion
    BitsPerSample = (header[34]) | (header[35] << 8);

    #region Useless Code
    string subchunk2ID = Encoding.UTF8.GetString(header, 20 + subChunkSize, 4);
    #endregion

    var offset = 24 + subChunkSize;
    int dataLength = (header[offset+0]) | (header[offset+1] << 8) | (header[offset+2] << 16) | (header[offset+3] << 24);

    //I can't find any documentation stating that I should make the following inference, but I've
    //seen wave files that have 0 in the bits per sample field. These wave files were 16-bit, so 
    //if bits per sample isn't specified I will assume 16 bits. 
    if (BitsPerSample == 0)
    {
        BitsPerSample = 16;
    }

    byte[] dataBuffer = new byte[dataLength];

    bytesRead = sourceStream.Read(dataBuffer, 0, dataBuffer.Length);


    Debug.Assert(bytesRead == dataLength);


    if (BitsPerSample == 8)
    {
        //8-bit WAV samples are unsigned (0-255). Center them on zero and
        //scale them up to the 16-bit range so that every sample size ends
        //up on the same scale.
        SoundData = new double[dataBuffer.Length];
        for (var i = 0; i < dataBuffer.Length; ++i)
        {
            SoundData[i] = ((double)dataBuffer[i] - 128d) * 256d;
        }
    }
    else if (BitsPerSample == 16)
    {
        short[] unadjustedSoundData = new short[dataBuffer.Length / (BitsPerSample / 8)];
        Buffer.BlockCopy(dataBuffer, 0, unadjustedSoundData, 0, dataBuffer.Length);


        SoundData = new double[unadjustedSoundData.Length];
        for (var i = 0; i < (unadjustedSoundData.Length); ++i)
        {
            SoundData[i] = (double) unadjustedSoundData[i];
        }
    }
    else if(BitsPerSample==32)
    {
        int[] unadjustedSoundData = new int[dataBuffer.Length / (BitsPerSample / 8)];
        Buffer.BlockCopy(dataBuffer, 0, unadjustedSoundData, 0, dataBuffer.Length);

        SoundData = new double[unadjustedSoundData.Length];
        for (var i = 0; i < (unadjustedSoundData.Length); ++i)
        {
            SoundData[i] = (double)unadjustedSoundData[i];
        }
    }

    Channels = new PcmChannel[ChannelCount];
    for (int i = 0; i < ChannelCount;++i )
    {
        Channels[i]=new PcmChannel(this,i);
    }
    if (normalizeAmplitude)
        NormalizeAmplitude();

}

Mono vs Stereo

In a mono (single channel) file the samples are ordered one after another, no mystery there. For stereo files the data stream will contain the first sample for channel 0, then the first sample for channel 1, then the second sample for channel 0, the second sample for channel 1, and so on; every other sample belongs to the left channel or the right channel. The sample data is stored in memory in the same interleaved order, in an array called SoundData. To work exclusively with one channel or the other there is also a property named Channels (an array of PcmChannel) that can be used to access a single channel.

public class PcmChannel
{
    internal PcmChannel(PcmData parent, int channel)
    {
        Channel = channel;
        Parent = parent;
    }
    protected PcmData Parent { get; set;  }
    public int Channel { get; protected set; }
    public int Length
    {
        get { return (int)(Parent.SoundData.Length/Parent.ChannelCount);  }
    }
    public double this[int index]
    {
        get { return Parent.SoundData[index*Parent.ChannelCount + Channel]; }
        set { Parent.SoundData[index*Parent.ChannelCount + Channel] = value; }
    }
}

//The following is a simplified interface definition for how the PcmChannel
//data type is relevant to our PCM data. The actual PcmData class has more
//members than what follows.
public class PcmData
{
   public double[] SoundData { get; set; }
   public int ChannelCount { get; set; }
   public PcmChannel[] Channels { get; set; }
}
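
As a quick usage sketch (hypothetical, assuming pcm is a stereo PcmData instance that has already been loaded), the Channels property lets you work on one channel while the indexer takes care of the interleaving for you:

//Halve the volume of channel 0 (the left channel) only
PcmChannel left = pcm.Channels[0];
for (int i = 0; i < left.Length; ++i)
{
    left[i] *= 0.5;
}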

Where's 24-bit support?

Yes, 24-bit WAVE files do exist. I'm not supporting them (yet) because there's more code required to handle them and most of the scenarios I have in mind are going to use 8 and 16-bit files. Adding support for 32-bit files was only 5 more lines of code. I'll be handling 24-bit files in a forthcoming post.

Altering the Sound Data

Changes made to the values in the SoundData[] array will alter the sound data. There are some constraints on how the data can be modified. Since I'm writing this to a 16-bit WAVE file, the maximum and minimum values that can be written out are 32,767 and -32,768. The double data type has a range significantly larger than this. The properties AdjustmentFactor and AdjustmentOffset are used to alter the sound data when it is being prepared to be written back to a file. They apply a linear transformation to the sound data (remember y=mx+b?). Finding the right values for these is done for you through the NormalizeAmplitude method. Calling this method after you've altered your sound data will result in appropriate values being chosen. By default this method will try to normalize the sound data to 99% of maximum amplitude. You can pass an argument between 0 and 1 to this method for some other amplitude.

public void NormalizeAmplitude( double percentMax = 0.99d)
{
    var max = SoundData.Max();
    var min = SoundData.Min();

    //Map the range [min, max] found in the data onto the requested
    //percentage of the 16-bit output range (the m and b of y = mx + b).
    double rangeSize = max - min + 1;
    AdjustmentFactor = ((percentMax * (double)short.MaxValue) - percentMax * (double)short.MinValue) / (double)rangeSize;
    AdjustmentOffset = (percentMax * (double)short.MinValue) - (min * AdjustmentFactor);

    //These values are only here for inspection under the debugger; they
    //show the extremes that the adjusted data will reach.
    int maxExpected = (int)(max * AdjustmentFactor + AdjustmentOffset);
    int minExpected = (int)(min * AdjustmentFactor + AdjustmentOffset);
}
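
Here's a usage sketch (hypothetical file name, assuming pcm holds loaded WAVE data whose samples you've already altered; the Write method is shown in the next section):

//Renormalize to 90% of full amplitude instead of the default 99%,
//then write the adjusted data out.
pcm.NormalizeAmplitude(0.90d);
using (Stream s = new FileStream("adjusted.wav", FileMode.Create, FileAccess.Write))
{
    pcm.Write(s);
}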

Saving WAVE Data

To save the WAVE data I'm using a variant of something I used to save the stream that comes from the microphone. The original form of the code had a bug that makes a difference when working with a stream that has multiple channels. The microphone produces a single channel stream and wasn't impacted by this bug (but it's fixed here). The code for writing the wave produces a header from the parameters it is given and then writes out the WAVE data. The WAVE data must be converted from the double[] array to a byte[] array containing 16-bit integers in little endian format.

public class PcmData
{
    public void Write(Stream destinationStream)
    {
        byte[] writeData = new byte[SoundData.Length*2];
        short[] conversionData = new short[SoundData.Length];

        //convert the double[] data back to int16[] data
        for(int i=0;i<SoundData.Length;++i)
        {
            double sample = ((SoundData[i]*AdjustmentFactor)+AdjustmentOffset);
            //if the value goes outside of range then clip it
            sample = Math.Min(sample, (double) short.MaxValue);
            sample = Math.Max(sample, short.MinValue);
            conversionData[i] = (short) sample;
        }
        //These values are only here for inspection under the debugger.
        int max = conversionData.Max();
        int min = conversionData.Min();
        //put the int16[] data into a byte[] array
        Buffer.BlockCopy(conversionData, 0, writeData, 0, writeData.Length);

        WaveHeaderWriter.WriteHeader(destinationStream,writeData.Length,ChannelCount,SampleRate);
        destinationStream.Write(writeData,0,writeData.Length);
    }
}

public class WaveHeaderWriter
{
    static byte[] RIFF_HEADER = new byte[] { 0x52, 0x49, 0x46, 0x46 };
    static byte[] FORMAT_WAVE = new byte[] { 0x57, 0x41, 0x56, 0x45 };
    static byte[] FORMAT_TAG = new byte[] { 0x66, 0x6d, 0x74, 0x20 };
    static byte[] AUDIO_FORMAT = new byte[] { 0x01, 0x00 };
    static byte[] SUBCHUNK_ID = new byte[] { 0x64, 0x61, 0x74, 0x61 };
    private const int BYTES_PER_SAMPLE = 2;

    public static void WriteHeader(
            System.IO.Stream targetStream,
            int byteStreamSize,
            int channelCount,
            int sampleRate)
    {

        int byteRate = sampleRate * channelCount * BYTES_PER_SAMPLE;
        //Block align is the size of one sample frame: one sample for each channel.
        int blockAlign = channelCount * BYTES_PER_SAMPLE;

        targetStream.Write(RIFF_HEADER, 0, RIFF_HEADER.Length);
        targetStream.Write(PackageInt(byteStreamSize + 36, 4), 0, 4);

        targetStream.Write(FORMAT_WAVE, 0, FORMAT_WAVE.Length);
        targetStream.Write(FORMAT_TAG, 0, FORMAT_TAG.Length);
        targetStream.Write(PackageInt(16, 4), 0, 4);//Subchunk1Size    

        targetStream.Write(AUDIO_FORMAT, 0, AUDIO_FORMAT.Length);//AudioFormat   
        targetStream.Write(PackageInt(channelCount, 2), 0, 2);
        targetStream.Write(PackageInt(sampleRate, 4), 0, 4);
        targetStream.Write(PackageInt(byteRate, 4), 0, 4);
        targetStream.Write(PackageInt(blockAlign, 2), 0, 2);
        targetStream.Write(PackageInt(BYTES_PER_SAMPLE * 8), 0, 2);
        //targetStream.Write(PackageInt(0,2), 0, 2);//Extra param size
        targetStream.Write(SUBCHUNK_ID, 0, SUBCHUNK_ID.Length);
        targetStream.Write(PackageInt(byteStreamSize, 4), 0, 4);
    }

    static byte[] PackageInt(int source, int length = 2)
    {
        if ((length != 2) && (length != 4))
            throw new ArgumentException("length must be either 2 or 4", "length");
        var retVal = new byte[length];
        retVal[0] = (byte)(source & 0xFF);
        retVal[1] = (byte)((source >> 8) & 0xFF);
        if (length == 4)
        {
            retVal[2] = (byte)((source >> 0x10) & 0xFF);
            retVal[3] = (byte)((source >> 0x18) & 0xFF);
        }
        return retVal;
    }
}
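
To make the header fields concrete, here's a hypothetical call and the values it produces:

//One second of mono audio at 22,050 samples per second, 16-bit samples:
//  byteRate   = 22050 * 1 * 2 = 44100 bytes per second
//  blockAlign = 1 * 2         = 2 bytes per sample frame
//  ChunkSize  = 44100 + 36    = 44136 (the file size minus the first 8 bytes)
WaveHeaderWriter.WriteHeader(targetStream, 44100, 1, 22050);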

Using the Code

Once you've gotten the wave stream only a few lines of code are needed to do the work. For the example program I am downloading a spoken phrase from the Microsoft Translation service, amplifying it, and then writing both the original and amplified versions to a file.

static void Main(string[] args)
{
    PcmData pcm;

    //Download the WAVE stream
    MicrosoftTranslatorService.LanguageServiceClient client = new LanguageServiceClient();            
    string waveUrl = client.Speak(APP_ID, "this is a volume test", "en", "audio/wav","");
    WebClient wc = new WebClient();
    var soundData = wc.DownloadData(waveUrl);

          
    //Load the WAVE stream and let its amplitude be adjusted to 99% of maximum
    using (var ms = new MemoryStream(soundData))
    {
        pcm = new PcmData(ms, true);               
    }

    //Write the amplified stream to a file
    using (Stream s = new FileStream("amplified.wav", FileMode.Create, FileAccess.Write))
    {
        pcm.Write(s);
    }

    //write the original unaltered stream to a file
    using (Stream s = new FileStream("original.wav", FileMode.Create, FileAccess.Write))
    {
        s.Write(soundData,0,soundData.Length);
    }
}

The End Result

The code works as designed, but I found a few scenarios that can make it ineffective. One scenario is that not all phones have the same frequency response in their speakers; frequencies that come through loud and clear on one phone may come through sounding quieter on another. The other scenario is that a source file may have a sample that goes to the maximum or minimum reading even though the majority of the other samples come nowhere near the same level of amplitude. When this occurs the spurious sample will limit the amount of amplification that is applied to the file. I opened an original and an amplified WAVE file in Audacity to see my results and I was pleased to see that the amplified WAVE does actually look louder when I view its waveform.

Part 2 - Overlaying Wave Files

The other problem that this code can solve is combining wave files together in various ways. I'll be putting that up in the next post. Between now and then I've got a presentation at the Windows Phone Developers Atlanta meeting this week (if you are in the Atlanta area come on out!) and will get back to this code after the presentation.

Passing thoughts, video effects on Windows Phone

Some years ago I saw the movie "A Scanner Darkly." There isn't much to talk about as far as the plot goes, but the visuals of the movie were unique. The movie was done with real actors, but the look of everything was as though it were drawn like a cartoon.

I thought about making an application that would allow someone to produce a similar effect in real time (or close to it) using a phone's camera. I thought I would be able to implement it with the K-means algorithm operating within color space (I will do another post on the details of this). Before diving into this task I needed to make sure that the phone was capable of doing this. I started by taking a look at Windows Phone, and these were the main things that I needed to satisfy:

  • Is real time access to the camera available?
  • Can I render video frames to the screen at an acceptable rate?
  • Can the phone provide the computational capability to quickly do the image processing?

One of the new capabilities that comes with the Mango update to Windows phone is access to the camera. In addition to getting information from the camera through tasks (which was available with the initial release of Windows Phone) Microsoft has granted developers the ability to paint a surface with a live feed from the camera, capture a video from the camera, capture a frame from the preview buffer, and take a photograph (without user interaction) from the camera. Let's examine how each one of those features does or does not contribute towards my goal and the program design.

Because of the nature of my goal (to work with video) the Windows Phone tasks (Camera Capture and Photo Chooser) won't work for my program. They both require user interaction for each frame captured. That's no way to work with video.

What about taking pictures automatically? This doesn't quite work either. Picture taking is slow. In general you'll find that the CCDs used in many digital devices are not able to capture and transmit the information from a full resolution photograph as quickly as they do when sending lower resolution video.

The ability to display the video buffer on screen looks promising. With it you can display whatever the camera sees. However, this capability is only for displaying the camera's "vision" on the screen and rendering over it (such as in augmented reality).

This leaves two methods: using the preview buffer and using the phone's video capturing abilities. Using the phone for video capture gives the highest framerate, but it ceases to be real time. I'd be fine with that; it would just mean that someone would need to film a video and then it would play back with the video effect applied. But that would also require that I decode the resulting MP4 video myself (there's no video codec available to do this). So the preview buffer seemed like the best option, and I did a quick test to see how many frames I could capture per second (before performing any processing).

 

//Fields used by the snippet (their declarations are implied by the original post)
PhotoCamera _camera;
int[] buffer;

public MainPage()
{
    InitializeComponent();
    _camera = new PhotoCamera();
    _camera.Initialized += new EventHandler<CameraOperationCompletedEventArgs>(_camera_Initialized);            
    videoBrush.SetSource(_camera);                 
}

void _camera_Initialized(object sender, CameraOperationCompletedEventArgs e)
{
    var x = _camera.PreviewResolution;
    int pixelCount = (int) (x.Width*x.Height);
    buffer = new int[pixelCount];

    Thread ts = new Thread(new ThreadStart(GrabFrames));
    ts.Start();
}

void GrabFrames()
{
    _camera.GetPreviewBufferArgb32(buffer);
    var startDate = DateTime.Now;
    for(int i=0;i<100;++i)
    {
        _camera.GetPreviewBufferArgb32(buffer);
    }
    var endTime = DateTime.Now;
    var delta = endTime.Subtract(startDate);
    //100 frames divided by the elapsed time gives the effective capture rate
    Debug.WriteLine("{0:0.0} frames per second", 100.0 / delta.TotalSeconds);
}

The results I got back on a Mango Beta HD7 worked out to 10 frames per second. Not quite real time video. So it looks like my best option is to go with the MP4 video recorder. I'll have to figure out how to read frames from an MP4 file.

I'm glad I was able to figure that out before writing a substantial amount of code or doing a substantial amount of design.

Mango Beta 2 Available for Phones Today!

The Beta 2 Mango Windows Phone Tools are available to developers today! Included with the beta is the ability for developers registered with the AppHub to flash their retail devices.

I know there are some non-developers out there that want to also flash their phones, and they may wonder how they can get their phones reflashed with the Mango beta. For the time being they cannot. There is an inherent risk in reflashing the phone; you could end up with a bricked phone if something goes bad. If this happens Microsoft has budgeted to take care of repairing up to one phone per developer. But Microsoft doesn't see this risk as being appropriate for user audiences. [Some] developers on the other hand are willing to risk their device's life and limb to have early access to something new. If you brick your device today Microsoft won't be prepared to act on it for another couple of weeks. That's not the best case scenario. But the alternative was to wait another couple of weeks before releasing the Mango tools. If you don't feel safe walking the tight rope without a safety net then don't re-flash your device yet.

According to the Windows Phone Developer site if you are a registered developer you will receive an e-mail inviting you to participate in early access to Mango.

Changing the Pitch of a Sound

I got a tweet earlier today from someone asking me how to change the pitch of a wave file. The person asking was aware that SoundEffectInstance has a setting to alter pitch, but it wasn't sufficient for his needs. He needed to be able to save the modified WAV to a file. It's something that is easy to do, so I made a quick example.

Video Example

I used a technique that comes close to matching linear interpolation. It gets the job done but isn't the best technique because of the opportunity for certain types of distortion to be introduced. Methods with less distortion are available at the cost of potentially more CPU cycles. For the example I made, no matter what the original sample rate was, I am playing back at 44.1KHz and adjusting my interpolation accordingly so that no unintentional changes in pitch are introduced.

To do the work I've created a class named AdjustedSoundEffect. It has a Play() method that takes as its argument the factor by which the pitch should be adjusted, where 1 plays the sound at the original pitch, 2 plays it at twice its pitch, and 0.5 plays it at half its pitch.

If you are interested the code I used is below.

using System;
using System.IO;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using Microsoft.Xna.Framework.Audio;

namespace J2i.Net.VoiceRecorder.Utility
{
    public class AdjustedSoundEffect
    {
        //I will always play back at 44.1KHz regardless of the original sample rate.
        //I'm making appropriate adjustments to prevent this from resulting in the
        //pitch being shifted.
        private const int PlaybackSampleRate = 44100;
        private const int BufferSize = PlaybackSampleRate*2;

        private int _channelCount = 1;
        private int _sampleRate;
        private int _bytesPerSample = 2; //bytes, not bits; overwritten when a wave header is read
        private int _byteCount = 0;
        private float _baseStepRate = 1;
        private float _adjustedStepRate;
        private float _index = 0;
        private int playbackBufferIndex = 0;
        private int _sampleStep = 2;

        private bool _timeToStop = false;

        private byte[][] _playbackBuffers;

        public bool IsPlaying { get; set;  }

        public object SyncRoot = new object();


        private DynamicSoundEffectInstance _dse;

        public static AdjustedSoundEffect FromStream(Stream source)
        {
            var retVal = new AdjustedSoundEffect(source);
            return retVal;
        }

        public AdjustedSoundEffect()
        {
            _playbackBuffers = new byte[3][];
            for (var i = 0; i < _playbackBuffers.Length;++i )
            {
                _playbackBuffers[i] = new byte[BufferSize];
            }
            _dse = new DynamicSoundEffectInstance(PlaybackSampleRate, AudioChannels.Stereo);
            _dse.BufferNeeded += new EventHandler<EventArgs>(_dse_BufferNeeded);
        }

        void SubmitNextBuffer()
        {
            if(_timeToStop)
            {
                Stop();
            }
            lock (SyncRoot)
            {
                byte[] nextBuffer = _playbackBuffers[playbackBufferIndex];
                playbackBufferIndex = (playbackBufferIndex + 1)%_playbackBuffers.Length;
                int i_step = 0;
                int i = 0;

                int endOfBufferMargin = 2*_channelCount;
                for (;
                    i < (nextBuffer.Length / 4) && (_index < (_sourceBuffer.Length - endOfBufferMargin));
                    ++i, i_step += 4)
                {

                    int k = _sampleStep*(int) _index;
                    if (k > _sourceBuffer.Length - endOfBufferMargin)
                        k = _sourceBuffer.Length -endOfBufferMargin ;
                    nextBuffer[i_step + 0] = _sourceBuffer[k + 0];
                    nextBuffer[i_step + 1] = _sourceBuffer[k + 1];
                    if (_channelCount == 2)
                    {
                        nextBuffer[i_step + 2] = _sourceBuffer[k + 2];
                        nextBuffer[i_step + 3] = _sourceBuffer[k + 3];
                    }
                    else
                    {
                        nextBuffer[i_step + 2] = _sourceBuffer[k + 0];
                        nextBuffer[i_step + 3] = _sourceBuffer[k + 1];

                    }
                    _index += _adjustedStepRate;
                }

                if ((_index >= _sourceBuffer.Length - endOfBufferMargin))
                    _timeToStop = true;
                for (; i < (nextBuffer.Length/4); ++i, i_step += 4)
                {
                    nextBuffer[i_step + 0] = 0;
                    nextBuffer[i_step + 1] = 0;
                    if (_channelCount == 2)
                    {
                        nextBuffer[i_step + 2] = 0;
                        nextBuffer[i_step + 3] = 0;
                    }
                }
                _dse.SubmitBuffer(nextBuffer);
            }
        }

        void _dse_BufferNeeded(object sender, EventArgs e)
        {
            SubmitNextBuffer();
        }

        private byte[] _sourceBuffer;
        

        public AdjustedSoundEffect(Stream source): this()
        {
            byte[] header = new byte[44];
            source.Read(header, 0, 44);

            // I'm assuming you passed a proper wave file so I won't bother 
            // verifying  that  the  header  is properly formatted and will 
            // accept it on faith :-)

            _channelCount = header[22] + (header[23] << 8);
            _sampleRate = header[24] | (header[25] << 8) | (header[26] << 16) | (header[27] << 24);
            _bytesPerSample = header[34]/8;
            _byteCount = header[40] | (header[41] << 8) | (header[42] << 16) | (header[43] << 24);
            _sampleStep = _bytesPerSample*_channelCount;
            _sourceBuffer = new byte[_byteCount];
            source.Read(_sourceBuffer, 0, _sourceBuffer.Length);


            _baseStepRate = ((float)_sampleRate) / PlaybackSampleRate;
        }

        /// <summary>
        /// Plays the sound with its pitch adjusted by the given factor.
        /// </summary>
        /// <param name="pitchFactor">Factor by which pitch will be adjusted. 2 doubles the frequency,
        /// 1 is the normal pitch, 0.5 halves the frequency</param>
        public void Play(float pitchFactor)
        {
            _timeToStop = false;

            _index = 0;
            lock (SyncRoot)
            {
                _adjustedStepRate = _baseStepRate * pitchFactor;
                _index = 0;
                playbackBufferIndex = 0;
            }
            if(!IsPlaying)
            {
                SubmitNextBuffer();
                SubmitNextBuffer();
                SubmitNextBuffer();
                _dse.Play();
                IsPlaying = true;
            }
        }

        public void Stop()
        {
            if(IsPlaying)
            {
                _dse.Stop();
            }
        }
    }
}
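
Here's a hypothetical usage of the class; waveStream is assumed to be a stream positioned at the start of a PCM wave file of the sort the constructor expects:

//Load the sound and play it back at twice the original pitch.
var effect = AdjustedSoundEffect.FromStream(waveStream);
effect.Play(2.0f);   //one octave up
//effect.Play(0.5f); //one octave down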

Adding an E-Mail Account to the WP Emulator

For one reason or another you may find that you want to add a real e-mail account to the Windows Phone emulator. Unfortunately the emulator doesn't directly expose a way for you to do this; the settings area on the phone doesn't display the tile to access the e-mail settings. You can get to the settings application indirectly though. This path is convoluted, but it works.

You'll need to make a simple application that does nothing more than show a phone call task. Once the task is displayed, accept the phone call and then select the option to add another caller. This takes you to the People Hub. Swipe through the People Hub to "What's New" and you will be prompted to add a Facebook or Twitter account. Select the option to do this (even though you are not really adding an account of that type) and when you are asked what type of account you want to add you can select one of the e-mail account types.
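
If it saves you a minute, this is roughly all the helper application needs to do (the phone number is arbitrary since the emulator doesn't place real calls):

//PhoneCallTask lives in Microsoft.Phone.Tasks. Accepting the call it
//shows is what opens the path to the People Hub described above.
PhoneCallTask phoneCall = new PhoneCallTask();
phoneCall.PhoneNumber = "5550100";
phoneCall.Show();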

Setting Custom Ringtones from Code [Mango:Beta 1]

Written against pre-release information

One of the new features coming with the next update to Windows Phone 7 is the ability to set custom ring tones. From within code you can make a ring tone available to a user (it's up to the user to accept the ring tone, so user settings won't ever be changed without user permission). I was looking at the new API for doing this, the SaveRingtoneTask.

To use the API you first need to get the ringtone of interest into isolated storage. It can be either an MP3 file or a WMA file up to 30 seconds in length. If the file is part of your application, just set its build type to "Resource".

file settings

Getting the file from being packed in the application to isolated storage is a matter of reading from a resource stream and writing to isolated storage.

var s = Application.GetResourceStream(
    new Uri("/MyApplicationName;component/1up.mp3", UriKind.Relative));

using (var f = IsolatedStorageFile.GetUserStoreForApplication().CreateFile("1up.mp3"))
{
    var buffer = new byte[2048];
    int bytesRead = 0;

    do
    {
        bytesRead = s.Stream.Read(buffer, 0, buffer.Length);
        f.Write(buffer, 0, bytesRead);
    } while (bytesRead > 0);
}

Once the file is in isolated storage you must pass its URI to the SaveRingtoneTask. URIs to isolated storage are preceded with "isostore:" (there is also an "appdata:" prefix, but we won't be using it here). Give the ringtone a display name and call the Show method to present the user with the option to save it. If you don't set the

SaveRingtoneTask srt = new SaveRingtoneTask();
srt.DisplayName = "1up";
srt.IsoStore= new Uri("isostore:/1up.mp3", UriKind.Absolute);
srt.IsShareable = true;
srt.Show();

Peer Communication on Windows Phone 7

Written against pre-release information

One of the new things that we get with Windows Phone 7 is socket support. While I expected to be able to open sockets to other machines with servers running on them, one thing caught me by surprise: you can also send communication from phone to phone using UDP. I've got to give credit to Ricky_T for pointing out the presence of this feature and posting a code sample. I wanted to try this out myself, so I made a version of the code sample that would run on both Windows Phone and on the desktop (Silverlight 4 in Out of Browser mode). I was pleasantly surprised that I was able to open up peer communication between the desktop and phone without a problem. This capability provides a number of solutions for other problems that I've been considering, such as automatic discovery and configuration for communicating with services hosted on a user's local network.

Most of the code used in the desktop and phone versions of this example is identical; I've shared some of the same files between projects. Of the files that are not shared, the counterparts in the phone and desktop versions are still similar. The core of the code is in a class called Peer. Let's take a look at part of the body of that class.

 

//Define the port and multicast address to be used for communication
private string _channelAddress = "224.0.0.1";
private int _channelPort = 3007;

//The event to be raised when a message comes in
public event EventHandler<MessageReceivedEventArgs> MessageReceived; 

//the UDP channel over which communication will occur.
private UdpAnySourceMulticastClient _channel;

//Create the channel
public void Initialize()
{
    _channel = new UdpAnySourceMulticastClient(IPAddress.Parse(_channelAddress), _channelPort);
}

//Open the channel and start listening
public void Open()
{
    if (_channel == null)
        Initialize();
    ClientState = ClientStatesEnum.Opening;
            

    _openResult = _channel.BeginJoinGroup((result) =>
                                                {
                                                    _channel.EndJoinGroup(result);
                                                    ClientState = ClientStatesEnum.Opened;
                                                }, null);   
            
    Receive();
}


 

//The receive method is recursive. At the end of a call to receive it calls itself 
//so that the class can continue listening for incoming requests.
void Receive()
{
    byte[] _receiveBuffer = new byte[1024];

    _channel.BeginReceiveFromGroup(_receiveBuffer, 0, _receiveBuffer.Length, (r) =>
    {
        if(ClientState!=ClientStatesEnum.Closing)
        {
            try
            {
                IPEndPoint source;
                int size = _channel.EndReceiveFromGroup(r, out source);
                OnMessageReceived(_receiveBuffer, size, source);
            }
            catch (Exception)
            {
            }
            finally
            {
                this.Receive();
            }
        }
    }, null);
}
public void Send(byte[] data)
{
    if(ClientState==ClientStatesEnum.Opened)
    {
        _channel.BeginSendToGroup(data, 0, data.Length, (r) => _channel.EndSendToGroup(r),null);
    }
}

This class only sends and receives byte arrays. My only goal here was to see the code work so there are other considerations that I have decided to overlook for now. I made a client to use this code too. The client sends and receives plain text. Before sending a block of text it is necessary to convert the text to a byte array. The encoding classes in .Net will take care of this for me. When a message comes in I can also use an encoder to convert the byte array back to a string.

For this program I am adding the incoming message to a list along with the IP address from which it came

void _peer_MessageReceived(object sender, MessageReceivedEventArgs e)
{
    Action a = () =>
                    {
                        string message = System.Text.Encoding.Unicode.GetString(e.Data, 0, e.Size);
                        MessageList.Add(String.Format("{0}:{1}", e.Endpoint.Address.ToString(), message));
                        OnIncomingMessageReceived(message, e.Endpoint.Address.ToString());
                    };
    if (UIDispatcher == null)
        a();
    else
        UIDispatcher.BeginInvoke(a);
}

public void SendMessage(string message)
{
    byte[] encodedMessage = Encoding.Unicode.GetBytes(message);
    _peer.Send(encodedMessage);
}

When the code is run on any combination of multiple phones or computers, a message typed on any one of the devices appears on all of them. Nice! Now to start making use of it.

John Conway's Game of Life part 1 of N

The Game of Life is a refinement of an idea from John von Neumann in the 1940s. The refinement was done by John Conway and appeared in Scientific American in October 1970. I'll skip over the details of why such a program is of interest, but the program produces some interesting patterns.

The typical version of the game is composed of a grid of cells where some number of cells are initially marked as having life. The grid of cells is evaluated, and cells get marked as alive or dead based on a small set of rules involving their neighbors. Two cells are neighbors with each other if they touch diagonally or side-by-side.

  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

The above rules are simple enough that the program is easy to implement. The challenge is more in creating a decent user interface for the program. I decided to make this program myself. The first step in making the program was to implement the algorithm. I wanted to make sure the algorithm worked, so I created a simple XNA program that would allow me to see the algorithm work. It's non-interactive, so you can watch the program run but not impact the outcome.

There's a small amount of data that needs to be tracked for each cell. I need to know if a cell is alive and whether or not it should be alive during the next cycle. The cell will also need to interact with other cells in the community. Some time in the future I plan to allow the cells to express something about the parents from which they came, though I won't be doing that for this first version.

public class Cell
{
     public CellCommunity  Community   { get; set; }
     public bool           IsAlive     { get; set; }
     public bool           WillSurvive { get; set; }
     public Gene           GeneList { get; set; }
}

The community of cells themselves will be saved in a two dimensional array. The cell community class has two methods that will do the work of calculating whether or not a cell should be alive the next cycle and another for applying the results of those calculations.

public void EvaluateNewGeneration()
{
    ++GenerationCount;

    for (var cx = 0; cx < CellGrid.GetUpperBound(0); ++cx)
    {
        for (var cy = 0; cy < CellGrid.GetUpperBound(1); ++cy)
        {
            var neighborList = GetNeighborList(cx, cy);

            if (IsAlive(cx, cy))
            {
                if ((neighborList.Length > MAX_NEIGHBOR_COUNT) || (neighborList.Length < MIN_NEIGHBOR_COUNT))
                    KillCell(cx, cy);
                else
                    KeepCellAlive(cx, cy);
            }
            else
            {
                if (neighborList.Length == 3)
                {
                    KeepCellAlive(cx, cy);
                }
            }
        }
    }
}

public void ApplySurvival()
{
    for (var cx = 0; cx < CellGrid.GetUpperBound(0); ++cx)
    {
        for (var cy = 0; cy < CellGrid.GetUpperBound(1); ++cy)
        {
            var cell = CellGrid[cx, cy];
            if (cell != null)
            {
                cell.IsAlive = cell.WillSurvive;
            }
        }
    }
}
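
The code above calls GetNeighborList, which isn't shown in this post. A minimal sketch of what it needs to do (gather the live cells touching a given cell) might look like this:

Cell[] GetNeighborList(int cx, int cy)
{
    var neighbors = new List<Cell>();
    for (int dx = -1; dx <= 1; ++dx)
    {
        for (int dy = -1; dy <= 1; ++dy)
        {
            if ((dx == 0) && (dy == 0))
                continue; //a cell is not its own neighbor
            int x = cx + dx;
            int y = cy + dy;
            if ((x < 0) || (y < 0) || (x > CellGrid.GetUpperBound(0)) || (y > CellGrid.GetUpperBound(1)))
                continue; //off the edge of the grid
            var cell = CellGrid[x, y];
            if ((cell != null) && cell.IsAlive)
                neighbors.Add(cell);
        }
    }
    return neighbors.ToArray();
}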

I decided to make the UI in XNA. I have an idea of how to visualize a cell changing state, and I can more easily implement it using a 3D API. Since the "world" of the Game of Life is a grid, I'm going to represent the state of a cell with a square that is either black (if the cell is not alive) or some other color (if the cell is alive). I'm drawing the squares by rendering vertices instead of drawing sprites. This gives me greater liberty in changing the color or shape of a cell. The following will draw one of the squares.

const int _squareWidth = 5;
const int _squareHeight = 5;
private const int _offsetX = -_squareWidth*30;
private const int _offsetY = -_squareHeight*18;

void DrawSquare(int x, int y, Color c)
{
    _vertices[0].Color = c;
    _vertices[1].Color = c;
    _vertices[2].Color = c;
    _vertices[3].Color = c;

    _vertices[0].Position.X = _offsetX + _squareWidth * x + _squareWidth;
    _vertices[0].Position.Y = _offsetY + _squareHeight * y;

    _vertices[1].Position.X = _offsetX + _squareWidth*x;
    _vertices[1].Position.Y = _offsetY + _squareHeight*y;

    _vertices[2].Position.X = _offsetX + _squareWidth * x + _squareWidth;
    _vertices[2].Position.Y = _offsetY + _squareHeight * y + _squareHeight;

    _vertices[3].Position.X = _offsetX + _squareWidth * x;
    _vertices[3].Position.Y = _offsetY + _squareHeight * y +_squareHeight;

    graphics.GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, _vertices, 0, _vertices.Length-2);     
}

With the ability to draw a square completed, it's easy to iterate through the collection of cells and render them to the screen according to whether or not they are alive.

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    var effect = new BasicEffect(GraphicsDevice);
    effect.World = _world;
    effect.Projection = _projection;
    effect.View = _view;
    effect.VertexColorEnabled = true;
    effect.TextureEnabled = false;
    effect.LightingEnabled = false;

    foreach(var effectPass in effect.CurrentTechnique.Passes)
    {
        effectPass.Apply();
        for (int cx = 0; cx < 60;++cx )
        {
            for(int cy=0;cy<36;++cy)
            {
                Color c = _community.IsAlive(cx, cy) ? Color.Red : Color.Black;
                DrawSquare(cx,cy,c);
            }
        }                    
    }
    base.Draw(gameTime);
}

I manually populated the grid and let it run. I'm happy to say it seems to be working. Now on to designing and making the user interface.

Screen Shot

Streaming from the Microphone to IsolatedStorage

Last week I posted a sample voice recorder on CodeProject. The application would buffer the entire recording in memory before writing it to a file. A rather astute reader asked me what would happen if the user let the recording go long enough to fill up memory. The answer to that question is that the application would crash due to an exception being thrown when it fails to allocate more memory, and all of the recording would be lost. I had already been thinking of a simple reusable solution for doing this, but I also offered the user the following code sample to handle streaming directly to IsolatedStorage.
My two goals in writing it were to keep it simple and keep it portable/reusable. As far as usage goes I can't think of any ways to make it any easier.
   //To start a recording
   StreamingRecorder myRecorder = new StreamingRecorder();
   myRecorder.Start("myFileName");

   //To stop a recording
   myRecorder.Stop();
After the code has run you will have a WAVE file with a proper header ready to be consumed by a SoundEffect, MediaElement, or whatever it is that you want to do with it.
In implementing this I must say that I have a higher appreciation for how MediaElement's interface is designed. The starting and stopping processes are not immediate. In other words, when you call Start() or Stop() it is not until a few moments later that the request is fully processed. Because of the asynchronous nature of these processes I've implemented the event RecordingStateChanged and the property RecordingState so that I would know when a state change was complete. If you are familiar with the MediaElement class then you'll recognize the similarity of this pattern.
I'll go into further details on how this works along with implementing some other functionality (such as a Pause method) in a later post. But the code is in a working state now so I'm sharing it. :-)
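
As an example of working with that asynchronous stop, here's a hypothetical way to wait for the recorder to finish flushing before touching the file:

myRecorder.RecordingStateChanged += (sender, e) =>
{
    if (e.NewState == RecordingState.Stopped)
    {
        //The wave header has been fixed up and the stream closed;
        //it's now safe to open "myFileName" from isolated storage.
    }
};
myRecorder.Stop();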

Here is the source:
public class StreamingRecorder :INotifyPropertyChanged,  IDisposable
{


    object SyncLock = new object();

    private Queue<MemoryStream> _availableBufferQueue;
    private Queue<MemoryStream> _writeBufferQueue;

    private int _bufferCount;
    private byte[] _audioBuffer;

    //private int _currentRecordingBufferIndex;
        

    private TimeSpan _bufferDuration;
    private int _bufferSize;
    private Stream _outputStream;
    private Microphone _currentMicrophone;
    private bool _ownsStream = false;
    private long _startPosition;

    

    public  StreamingRecorder(TimeSpan? bufferDuration = null, int bufferCount=2)
    {
        _bufferDuration = bufferDuration.HasValue ? bufferDuration.Value : TimeSpan.FromSeconds(0);
        _bufferCount = bufferCount;
        _currentMicrophone= Microphone.Default;   
    }

    private MemoryStream CurrentBuffer
    {
        get; set;
    }

    public void Start(string fileName)
    {
        var isoStore = System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication();
        var targetFile = isoStore.OpenFile(fileName, FileMode.Create);
        Start(targetFile, true);
    }

    public void Start(Stream outputStream, bool ownsStream=false)
    {
        _outputStream = outputStream;
        _ownsStream = ownsStream;
        _startPosition = outputStream.Position;

        Size = 0;

        //Create our recording buffers
        _availableBufferQueue = new Queue<MemoryStream>();
        _writeBufferQueue = new Queue<MemoryStream>();
        _audioBuffer = new byte[_currentMicrophone.GetSampleSizeInBytes(_currentMicrophone.BufferDuration)];
        _bufferSize = _currentMicrophone.GetSampleSizeInBytes(_bufferDuration + _currentMicrophone.BufferDuration);
        for (var i = 0; i < _bufferCount; ++i)
        {
            _availableBufferQueue.Enqueue(new MemoryStream(_bufferSize));
        }

        CurrentBuffer = _availableBufferQueue.Dequeue();
        //Stuff a bogus wave header in the output stream as a space holder.
        //we will come back and make it valid later. For now the size is invalid.
        //I could have just as easily stuffed any set of values here as long as 
        //the size of those values equaled 0x2C
        WaveHeaderWriter.WriteHeader(CurrentBuffer, -1, 1, _currentMicrophone.SampleRate);
        Size += (int)CurrentBuffer.Position;

        //Subscribe to the Microphone's buffer ready event and start listening.
        _currentMicrophone.BufferReady += new EventHandler<EventArgs>(_currentMicrophone_BufferReady);
        _currentMicrophone.Start();
        //Record the state change so that Pause(), Stop(), and state watchers behave.
        CurrentState = RecordingState.Recording;
    }


    void _currentMicrophone_BufferReady(object sender, EventArgs e)
    {
        _currentMicrophone.GetData(_audioBuffer);
        //If the recorder is paused then don't add this audio chunk to
        //the output.
        if ((CurrentState != RecordingState.Paused))
        {
            //Append the audio chunk to our current buffer
            CurrentBuffer.Write(_audioBuffer, 0, _audioBuffer.Length);
            //Increment the size of the recording.
            Size += _audioBuffer.Length;
            //If the buffer is full or if we are shutting down then we need to submit
            //the buffer to be written to the output stream.
            if ((CurrentBuffer.Length > _bufferSize)||(CurrentState==RecordingState.Stopping))
            {

                SubmitToWriteBuffer(CurrentBuffer);
                //If we were shutting down then set a flag so that it is known that the last audio
                //chunk has been written. 
                if (CurrentState == RecordingState.Stopping)
                {
                    _currentMicrophone.Stop();
                    _currentMicrophone.BufferReady -= _currentMicrophone_BufferReady;
                }
                CurrentBuffer = _availableBufferQueue.Count > 0 ? _availableBufferQueue.Dequeue() : new MemoryStream();
            }
        }
    }

                

    // CurrentState - generated from ObservableField snippet - Joel Ivory Johnson

    private RecordingState _currentState;
    public RecordingState CurrentState
    {
        get { return _currentState; }
        set
        {
            if (_currentState != value)
            {
                _currentState = value;
                OnPropertyChanged("CurrentState");
                OnRecordingStateChanged(value);
            }
        }
    }
    //-----


    void WriteData(object a )
    {

        lock(SyncLock)
        {                
            while (_writeBufferQueue.Count > 0)
            {
                var item = _writeBufferQueue.Dequeue();
                var buffer = item.GetBuffer();
                _outputStream.Write(buffer, 0,(int) item.Length);
                item.SetLength(0);

                _availableBufferQueue.Enqueue(item);

                if (CurrentState == RecordingState.Stopping)
                {
                    //Correct the information in the wave header. After it is
                    //written set the file pointer back to the end of the file.
                    long prePosition = _outputStream.Position;
                    _outputStream.Seek(_startPosition, SeekOrigin.Begin);
                    WaveHeaderWriter.WriteHeader(_outputStream,Size-44,1,_currentMicrophone.SampleRate);
                    _outputStream.Seek(prePosition, SeekOrigin.Begin);
                    _outputStream.Flush();
                    if (_ownsStream)
                        _outputStream.Close();
                    CurrentState = RecordingState.Stopped;
                }
            }
        }
    }

    void SubmitToWriteBuffer(MemoryStream target)
    {
        //Do the writing on another thread so that processing on this thread can continue. 
        _writeBufferQueue.Enqueue(target);
        ThreadPool.QueueUserWorkItem(new WaitCallback(WriteData));
    }

    public void Pause()
    {
        if ((CurrentState != RecordingState.Paused) && (CurrentState != RecordingState.Recording))
        {
            throw new Exception("you can't pause if you are not recording");
        }
        CurrentState = RecordingState.Paused;
    }

    public void Stop()
    {
        CurrentState = RecordingState.Stopping;
    }


    // Size - generated from ObservableField snippet - Joel Ivory Johnson

    private int  _size;
    public int Size
    {
        get { return _size; }
        set
        {
            if (_size != value)
            {
                _size = value;
                OnPropertyChanged("Size");
            }
        }
    }
    //-----

    public long RemainingSpace
    {
        get
        {                
            return System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication().AvailableFreeSpace;
        }
    }

    public TimeSpan RecordingDuration
    {
        get
        {
            return _currentMicrophone.GetSampleDuration((int)Size);
        }
    }

    public TimeSpan RemainingRecordingTime
    {
        get
        {
            return _currentMicrophone.GetSampleDuration((int)RemainingSpace);
        }
    }

    //-------

    public event EventHandler<RecordingStateChangedEventArgs> RecordingStateChanged;
    protected void OnRecordingStateChanged(RecordingState newState)
    {
        if(RecordingStateChanged!=null)
        {
            RecordingStateChanged(this, new RecordingStateChangedEventArgs(){NewState = newState});
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    protected void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public void Dispose()
    {
        Stop();
    }
}

 

Tracking High Scores on Windows Phone

Another frequent question I come across in the user forums is related to how someone implements local high scores. The question has come up frequently enough for me to conclude that it's to the benefit of the community to have an implementation available that can be used in Silverlight or XNA and is ready to be used with very little setup.

So I've made a solution for others to use. By default the component will keep track of up to 10 high scores and will take care of loading and saving itself. If you add a score the component will take care of ensuring the score is in its proper place and removing scores that are no longer among the top. For persisting the score information I've made use of the DataSaver<T> code from a previous blog post. I hope others will find the solution easy to use.

To get started with using the component add a reference to my component to your project. You'll want to instantiate HighScoreList passing an optional file name that it will use to save score information. It's possible to keep track of more than one high score list as long as your instances have different file names. One might want to do this if they keep track of scores in different modes separately from each other (Ex: a score list for Difficult mode, a score list for Easy mode, and so on).

HighScoreList _highScoreList = new HighScoreList("MyScores"); 

Upon instantiation the component will take care of loading any previous high scores without you doing anything more.

To add a score create a new instance of ScoreInfo and populate its PlayerName and Score fields. (There is also a ScoreDate field that automatically gets populated with the current date and time). Then use the AddScore(ScoreInfo) method on the HighScoreList instance to add it to the score list.

ScoreInfo scoreInfo = new ScoreInfo(){PlayerName = "Jack", Score = 1048576};
_highScoreList.AddScore(scoreInfo);

And that's it, there's nothing more for you to do. When you make that call the score gets added to the high score list, scores that are no longer in the top 10 (or what ever you set the limit to be) will fall off the list, and the list will automatically be persisted back to IsolatedStorage so that it is available the next time your game runs. Easy, right?

As a test project I've created a Silverlight application that allows you to enter new scores and see the behaviour of the component.

Score Keeper Screenshot

The main bits of the source code are below. First the ScoreInfo class which is nothing more than a serializable collection of three properties

/// <summary>
/// ScoreInfo contains information on a single score
/// </summary>
[DataContract]
public class ScoreInfo : INotifyPropertyChanged 
{

                
    // PlayerName - generated from ObservableField snippet - Joel Ivory Johnson
    private string _playerName = String.Empty;

    /// <summary>
    /// The name of the player that made this score
    /// </summary>
    [DataMember]
    public string PlayerName
    {
        get { return _playerName; }
        set
        {
            if (_playerName != value)
            {
                _playerName = value;
                OnPropertyChanged("PlayerName");
            }
        }
    }
    //-----

    // Score - generated from ObservableField snippet - Joel Ivory Johnson
    private int _score = 0;

    /// <summary>
    /// The score that the player made
    /// </summary>
    [DataMember]
    public int Score
    {
        get { return _score; }
        set
        {
            if (_score != value)
            {
                _score = value;
                OnPropertyChanged("Score");
            }
        }
    }
    //-----

    // ScoreDate - generated from ObservableField snippet - Joel Ivory Johnson
    private DateTime _scoreDate = DateTime.Now;

    /// <summary>
    /// The date and time that the player made the score. If this field is not
    /// assigned a value it will automatically be assigned with the date and time
    /// that the score instance was created
    /// </summary>
    [DataMember]
    public DateTime ScoreDate
    {
        get { return _scoreDate; }
        set
        {
            if (_scoreDate != value)
            {
                _scoreDate = value;
                OnPropertyChanged("ScoreDate");
            }
        }
    }
    //-----

    protected void OnPropertyChanged(String propertyName)
    {
        if(PropertyChanged!=null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    #region INotifyPropertyChanged Members

    public  event PropertyChangedEventHandler PropertyChanged;

    #endregion
}

And then the HighScoreList class, which is a collection class that contains the scores.

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Runtime.Serialization;


namespace J2i.Net.ScoreKeeper
{
    public class HighScoreList : ObservableCollection<ScoreInfo>, INotifyPropertyChanged    
    {
        static DataSaver<HighScoreList> MyDataSaver = new DataSaver<HighScoreList>();

        public HighScoreList()
        {
            
        }

        public HighScoreList(string fileName):this()
        {
            this.ScoreFileName = fileName;
            HighScoreList temp = MyDataSaver.LoadMyData(fileName);
            if(temp!=null)
            {
                foreach(var item in temp)
                {
                    Add(item);
                }
            }
        }
                
        // MaxScoreCount - generated from ObservableField snippet - Joel Ivory Johnson
        private int _maxScoreCount = 10;
        [DataMember]
        public int MaxScoreCount
        {
            get { return _maxScoreCount; }
            set
            {
                if (_maxScoreCount != value)
                {
                    _maxScoreCount = value;
                    OnPropertyChanged("MaxScoreCount");
                }
            }
        }
        //-----


                
        // ScoreFileName - generated from ObservableField snippet - Joel Ivory Johnson
        private string _scoreFileName = "DefaultScores";
        [DataMember]
        public string ScoreFileName
        {
            get { return _scoreFileName; }
            set
            {
                if (_scoreFileName != value)
                {
                    _scoreFileName = value;
                    OnPropertyChanged("ScoreFileName");
                }
            }
        }
        //-----

                
        // AutoSave - generated from ObservableField snippet - Joel Ivory Johnson
        private bool _autoSave = true;
        [DataMember]
        public bool AutoSave
        {
            get { return _autoSave; }
            set
            {
                if (_autoSave != value)
                {
                    _autoSave = value;
                    OnPropertyChanged("AutoSave");
                }
            }
        }
        //-----

        static int ScoreComparer(ScoreInfo a, ScoreInfo b)
        {
            return b.Score - a.Score;
        }

        public void SortAndDrop()
        {
            List<ScoreInfo> temp = new List<ScoreInfo>(this.Count);
            foreach(var item in this)
            {
                temp.Add(item);
            }

            //Sort first so that the scores dropped are the lowest ones.
            temp.Sort(ScoreComparer);

            if (temp.Count > MaxScoreCount)
            {
                temp.RemoveRange(MaxScoreCount, temp.Count - MaxScoreCount);
            }

            this.Clear();
            temp.ForEach((o)=>Add(o));
        }

        public void Save()
        {
            if(String.IsNullOrEmpty(ScoreFileName))
                throw new ArgumentException("A file name wasn't provided");
            MyDataSaver.SaveMyData(this, ScoreFileName);
        }

        public void AddScore(ScoreInfo score)
        {
            this.Add(score);
            SortAndDrop();
            if(AutoSave)
                Save();
        }
        
        
        
        protected void OnPropertyChanged(String propertyName)
        {
            if(PropertyChanged!=null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #region INotifyPropertyChanged Members

        public event PropertyChangedEventHandler PropertyChanged;

        #endregion
    }
}
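
The DataSaver<T> class comes from an earlier post and isn't reproduced here. For reference, here's a minimal sketch of the interface that HighScoreList assumes (my own reconstruction using the DataContractSerializer and isolated storage, not the original code):

using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;

public class DataSaver<T> where T : class
{
    public void SaveMyData(T sourceData, string targetFileName)
    {
        using (var isoStore = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = isoStore.OpenFile(targetFileName, FileMode.Create))
        {
            var serializer = new DataContractSerializer(typeof(T));
            serializer.WriteObject(stream, sourceData);
        }
    }

    public T LoadMyData(string sourceName)
    {
        using (var isoStore = IsolatedStorageFile.GetUserStoreForApplication())
        {
            //Return null when there is no saved file yet.
            if (!isoStore.FileExists(sourceName))
                return null;
            using (var stream = isoStore.OpenFile(sourceName, FileMode.Open))
            {
                var serializer = new DataContractSerializer(typeof(T));
                return (T)serializer.ReadObject(stream);
            }
        }
    }
}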