J2i.Net

Nothing at all and Everything in general.

Beginning WCF for Windows Phone Developers

I'm back in the USA from the Bahamas, a place where Internet access cost me 0.50 USD/minute (thus I hardly used it).

A few questions have come up in the Windows Phone developer forum centered on serialization and WCF. I've been working on an introduction to WCF for WP7 developers (though due to recent travel and holidays I'm not done with it yet) and am releasing it in parts. In this first part I touch on serialization and create a simple WCF service. I'm starting off with desktop WCF projects and then will transition to Windows Phone 7 specifics. Because WP7 supports a subset of WCF I won't venture far into WCF functionality that isn't available on WP7.

Prerequisites

I'm writing this for the developer that is just getting started with Windows Phone development. That developer will already have learned the C# language and have become comfortable with the Windows Phone application model of their choice (Silverlight or XNA). A developer in my target audience doesn't necessarily have a strong web services background. If you still need to get an understanding of Windows Phone 7 programming you might want to check out this free learning material by Rob Miles.

Required Software

The software required for this article is Visual Studio 2010. The Express edition of Visual Studio that comes with the Windows Phone tools won't be sufficient; I'll be using some desktop programming examples in this article.

Why WCF

With the ubiquity of Internet access it's common for programs to rely on services hosted on other machines. Sometimes this is because data consumed by the program is kept in some centralized location. Other times it is to distribute computational load. Having a machine on which to host a service also facilitates communication between different client instances, such as Instant Messenger-type interactions. Whatever your reason, there are a lot of scenarios in which programs may need to communicate with other machines.

Communication among different clients can be implemented in a number of different ways. Clients may communicate with a service over raw socket connections, sending messages to the server and receiving data in a format completely authored by developers. Or a client may communicate using HTTP web requests, much like your browser does. It is even possible for clients to use e-mail protocols for sending messages to other services. The number of ways in which communication could be implemented is countless. Whatever method you choose, both the client you are developing and the service that is providing the functionality must agree on the formats used for the data.

WCF, or Windows Communication Foundation, provides functionality that a developer can use to quickly implement communication between services and clients. Instead of burdening you with implementing communication at the socket level, WCF allows you to specify how communication will occur in higher-level terms and takes care of managing communication channels that conform to what you've specified. You can specify that you want communication to occur over certain protocols (many of which are standards compliant) along with marking which elements of data or functionality will be exposed to the end user. WCF will take care of converting your data elements into a format that can be transferred over the connection and will take care of reassembling the new objects on the other side of the connection.

Serialization

Whether you are sending data over a network connection or saving it to storage, your data needs to be serialized. Serialization is simply converting data to a format that is appropriate for these purposes. At first one may wonder why the data needs to be converted. After all, the data is nothing more than bits in memory, and bits can be transmitted. But those bits may not have much meaning if transmitted in an unconverted format. The bits could contain the name of a file that doesn't exist on the remote system. The bits could contain a pointer that may not point to the same item of data on another machine (or may not point to any relevant data on the same machine during a different session!). Also, some data may not need to be transmitted or saved. If I made a class representing a rectangle I may decide that I only need to serialize the Width and Height members but not the Area member (since I can always recalculate it from the other two members).
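
As a quick sketch of that last point, Area could be exposed as a computed, read-only property. Since it has no setter, the XmlSerializer discussed below will skip it, and it can always be recalculated after deserialization:

public class Rectangle
{
    public double Width { get; set; }
    public double Height { get; set; }

    //Read-only, so it is not serialized; recomputed from Width and Height.
    public double Area { get { return Width * Height; } }
}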

Classes that serialize our data are called serializers. The two serializers that I will discuss here are the DataContractSerializer and the XmlSerializer. A number of data types are already serializable. These include the various numeric types (double, int, byte, float and so on), string, and arrays composed of these data types. We don't need to do any extra work to be able to serialize these. It's the complex data types for which we need to do some more work. Let's start with a stereotypical employee class.

class Employee
{
   public int     Number { get; set; }
   public string  Name { get; set; }
   public string Position { get; set; }
}

Without a serializer, if you wanted to write entities of this type to a file you would need to decide on some way of delimiting your data and write code to place each value in your file. To read from the file you would also need to write code that would load the values from the file in the same order. If there were a change to your data type you would also need to change your code for reading and writing. That amounts to a lot of busy work for a result that is neither intellectually rewarding nor productive. When using a serializer things are much simpler. One only needs to instantiate a serializer, telling it what type of data it will be serializing, and then use the serializer to read or write the stream. The following code demonstrates what must be done. Note that this code is written to run on a desktop so that we can more easily get to the resulting file.

//Requires using System.IO; and using System.Xml.Serialization;
var e = new Employee()
            {
                Number = 515148, 
                Name = "Joel Ivory Johnson", 
                Position = "Owner"
            };

XmlSerializer employeeSerializer = new XmlSerializer(typeof(Employee));
using(StreamWriter sw = new StreamWriter("employeeData.xml"))
{
    employeeSerializer.Serialize(sw,e);
    sw.Close();
}
            

Running this code will result in an error though. The error will say that Employee is inaccessible due to its protection level. To correct this, Employee must be declared as public; then the code will work. The program's resulting file will be found in its directory and its content looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<Employee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Number>515148</Number>
  <Name>Joel Ivory Johnson</Name>
  <Position>Owner</Position>
</Employee>

What if I have a property on the class that cannot be serialized? To create this scenario I'm adding a new property of type IntPtr, which is undoubtedly not serializable.

public class Employee
{
    public int Number { get; set; }
    public string Name { get; set; }
    public string Position { get; set; }
    public IntPtr X { get; set; } //This will not serialize
}

Making the change will result in an error message stating "There was an error reflecting type 'Employee'." I don't want the pointer element serialized. Placing the [XmlIgnore] attribute on the property makes all well again.
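
With the attribute applied the property looks like this:

[XmlIgnore]
public IntPtr X { get; set; } //skipped by the XmlSerializer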

There are a number of other attributes that can be placed on a class being serialized with the XmlSerializer. I won't discuss those within this article but mention them so that those who wish to have control over the resulting XML know that there are additional options.

DataContractSerializer

The DataContractSerializer appears to work like the XmlSerializer at first glance. Let's take a look at the source for the same program if it used the DataContractSerializer instead of the XmlSerializer along with its output.
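
Here's a minimal sketch of that change (DataContractSerializer lives in the System.Runtime.Serialization assembly and namespace, and writes to a stream directly rather than through a StreamWriter):

var employeeSerializer = new DataContractSerializer(typeof(Employee));
using (FileStream fs = new FileStream("employeeData.xml", FileMode.Create))
{
    employeeSerializer.WriteObject(fs, e);
}

The output follows: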

<Employee xmlns="http://schemas.datacontract.org/2004/07/J2i.Net.Example01.DataContractSerialization" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Name>Joel Ivory Johnson</Name>
  <Number>515148</Number>
  <Position>Owner</Position>
  <X xmlns:a="http://schemas.datacontract.org/2004/07/System"><value i:type="b:long" xmlns="" xmlns:b="http://www.w3.org/2001/XMLSchema">0</value></X>
</Employee>

The DataContractSerializer was able to serialize the X member, but I don't want that element serialized. To solve this problem I can add the [IgnoreDataMember] attribute to the property. There's something missing though. Typically if you are designing a class to be used by the DataContractSerializer it will be decorated with the [DataContract] attribute. If you add the [DataContract] attribute to the class and run the program again you'll end up with an empty XML file. While the XmlSerializer is an opt-out serializer (it will try to serialize everything unless told to do otherwise), the DataContractSerializer is an opt-in serializer (it won't serialize anything unless a member is marked otherwise). So to correct this we need to add [DataMember] to each one of the members that we want serialized. In general you'll want to have some control over how your class is serialized, so serialization with the [DataContract] attribute is preferred.
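
The fully decorated class would look something like this; note that X no longer needs an attribute at all, since unmarked members are simply skipped once the class is marked [DataContract]:

[DataContract]
public class Employee
{
    [DataMember]
    public int Number { get; set; }

    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public string Position { get; set; }

    public IntPtr X { get; set; } //not a [DataMember], so it is not serialized
}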

The Service Contract

While the data contract allows you to declare constructs used for passing data around, the service contract lets you make declarations about the functionality that will be provided. Like the data contract, you decorate a class with attributes. But instead of specifying what data is available to external entities, the attributes specify what functionality is available to external entities. For the sake of cleanly separating the specification of functionality from its implementation I'll be using interfaces to declare functionality. The implementation of that functionality will then be in a class that implements the interface.

To get started let's make a simple service with a few pieces of simple functionality. I'll call this our Mathimatics service. This service can take two numbers and either multiply them or return the difference between the two numbers. Since the data types being passed around are simple we won't need to concern ourselves with data contracts. To get started I'm creating a Windows class library that will contain the interface that defines the service contract and a class that implements the service. The interface is decorated with the [ServiceContract] attribute and the methods on the interface are decorated with the [OperationContract] attribute.

[ServiceContract]
public interface IMathimaticsService
{
    [OperationContract]
    double Multiply(double arg1, double arg2);

    [OperationContract]
    double Difference(double arg1, double arg2);
}

class MathimaticsService : IMathimaticsService
{
    public double Multiply(double arg1, double arg2)
    {
        return arg1*arg2;
    }

    public double Difference(double arg1, double arg2)
    {
        return Math.Abs(arg1 - arg2);
    }
}

The code will compile just fine, but in its present state you can't run it. You could create another project and add the assembly produced by the above as a reference, but that would defeat the point of a service. With a service you want your functionality in one machine or process while the client using the functionality is generally on another machine or in another process. To make use of this code we'll actually need two more projects: a project that will host the functionality and another project that will make use of it. The service can be hosted in just about any Windows application: a web application, Windows Forms application, console application, and so on. Though generally one will divide the hosting options into web or desktop.

I'll use a console application to host the service for now. There are still some more decisions to be made. A WCF service must always have three things defined when it is hosted: an address, a binding, and a contract. Some call this the "ABCs of WCF" to make it easier to remember. The address will be the machine name or IP address along with the port over which the service will communicate. In most of the examples that follow the address will be localhost:8123, though when running a client from an actual Windows Phone you'll want to have your computer name in its place. The binding defines how communication will occur. Presently Windows Phone only supports HTTP-based communication, so I'll only be using basic HTTP binding (HTTPS is also supported, but I won't cover setting up your machine to do secure communication). We've already covered what a contract is.

What's an endpoint?

The console application will instantiate a new ServiceHost with our address and set the binding for it. We want other potential clients to be able to inquire about our service's contracts, so we'll need to add a metadata exchange endpoint. With the metadata endpoint added, other development tools will be able to look at our service definition and generate code to make the service easier for the developer to use. Without it the developer has no way to know what functionality is available short of it being communicated through some other means. With all of the above defined, the only thing left to do is open the communications channel by calling ServiceHost.Open(). The process will need to stay alive so that the service can continue to be available, and then we should free up the service's resources by calling ServiceHost.Close().

static void Main(string[] args)
{

    ServiceHost myServiceHost = new ServiceHost(typeof(MathimaticsService),
        new Uri("http://localhost:8123/MathimaticsService"));

    ServiceMetadataBehavior myBehavior = new ServiceMetadataBehavior(){HttpGetEnabled = true};
    myServiceHost.Description.Behaviors.Add(myBehavior);
    myServiceHost.AddServiceEndpoint(typeof (IMathimaticsService), new BasicHttpBinding(), String.Empty);
    myServiceHost.AddServiceEndpoint(typeof (IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(),
                                     "mex");
    myServiceHost.Open();

    //Keep the service from immediately terminating by waiting on the user to press the Enter key
    Console.WriteLine("Press [Enter] to close");
    Console.ReadLine();

    myServiceHost.Close();

}      

While it would work, there's something I don't like about the above code. What would I need to do to move the code to a different machine or to host the service on a different port? I would need to change the code, recompile it, and redeploy it. That's no good. I could move the information about the port and address to an external file and read it at run time so that the program is no longer bound to compile-time settings. Microsoft has already included support for doing this with the App.config file. Covering how App.config works is beyond the scope of this writing, so what I discuss here is a minimum. I've added a new item to my service host project named App.config. When the project is compiled this file will be copied to a file that has the same name as the executable with .config appended to the end.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="J2i.Net.Example02.ServiceContractExample.MathimaticsService" >
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8123/MathimaticsService"/>
          </baseAddresses>
        </host>
        <endpoint address="" binding="basicHttpBinding"
                  contract="J2i.Net.Example02.ServiceContractExample.IMathimaticsService" />
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

WCF is already capable of pulling this information from the App.config and using it. The code for our service is greatly simplified.

static void Main(string[] args)
{
    ServiceHost myServiceHost = new ServiceHost(typeof(MathimaticsService));
    myServiceHost.Open();

    Console.WriteLine("Press [Enter] to close.");
    Console.ReadLine();

    myServiceHost.Close();
}
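
Though building clients is a topic for a later part, here is a rough sketch of what a desktop console client could look like. This assumes the client project references the assembly containing IMathimaticsService; alternatively, Visual Studio can generate a proxy from the metadata exchange endpoint through "Add Service Reference."

var factory = new ChannelFactory<IMathimaticsService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost:8123/MathimaticsService"));

IMathimaticsService proxy = factory.CreateChannel();
Console.WriteLine(proxy.Multiply(6, 7));   // 42
Console.WriteLine(proxy.Difference(6, 7)); // 1
factory.Close();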

Physics 101 Part 1 of N: Acceleration

When I was in college, since I was going after a B.S. in Computer Science I was required to take the calculus-based physics classes (those going after a B.A. in Computer Science could take the algebra-based versions instead). Because of some conversion complications with the school changing from the quarter system to the semester system I had to take a full year of physics classes to ensure I didn't lose credits. Since then I've used the math I learned there out of personal curiosity (though I must admit I've not had a frequent desire to do any calculations around accelerating electrons or mass/energy conversion).

It didn't occur to me until a few weeks ago that I should share this knowledge. I'll stick with talking about mechanics and leave the thermodynamics and quantum physics out of the discussion for now, and all of my code examples will run on WP7 (though they may also work on the Xbox 360 and PC). Let's start with talking about Acceleration.

A word about measurement units. I live in the USA, and here we use English units such as the inch, foot, mile, pound, and ton. These units don't seem to have much to do with each other and tend to complicate the math that involves them. The SI units work out much more nicely. For those living in much of the rest of the world this may conform to the way you already do things. For those living in the USA I hope you don't mind conforming to this different way of doing things, but I plan to avoid the English system of measurement altogether (imagine if the engineers at Lockheed Martin had done the same thing!).

Acceleration

Let's say I told you to create a Windows Phone project that will accelerate a red dot on the screen. Using your rendering technology of choice you position the red dot on the screen and start moving it at a constant rate in some direction. While that works, it doesn't look natural. Acceleration in the real world is usually gradual. An object that accelerated from 0 km/h to 60 km/h also traveled at intermediate speeds between 0 km/h and 60 km/h. Let's take a look at how acceleration works and then use that to implement a more natural looking program.

Exactly what is acceleration? Acceleration is a change in velocity. Velocity is a combination of speed and direction. If you are travelling on a straight road at 60 km/h that information alone doesn't describe your velocity; it only describes your speed. If you are said to be travelling at 60 km/h on a straight road heading north then your velocity has been described (at least in a two-dimensional sense; I'm ignoring changes in elevation). If you change either your speed or your direction your velocity is changing. Speeding up, slowing down, or turning the steering wheel will cause your vehicle to experience acceleration.

If you increase your speed from 60 km/h to 80 km/h in 5 seconds without changing direction you've changed your speed by 20 km/h. Assuming you changed your speed at a constant rate during those 5 seconds, you accelerated by 4 km/h per second.

An astute observer may have noticed that there are two units of time expressed above: the hour and the second. For the sake of consistency we can convert the change in speed from km/h to m/s. Our change of 20 km/h is equal to a change of about 5.56 m/s (feel free to check my math here and anywhere else in this article and inform me if I make any mistakes). So we can now say the rate of acceleration was 1.11 m/s per second (also expressible as 1.11 m/s^2). Technically there's a direction component to the acceleration. When one doesn't state the direction component then it is assumed to be in the same direction in which the object is already travelling. If the acceleration is negative then it is assumed to be in a direction opposing that in which the object is travelling.
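
In code the conversion works out like this:

double deltaSpeedKmH = 80.0 - 60.0;                    //change in speed: 20 km/h
double deltaSpeedMS = deltaSpeedKmH * 1000.0 / 3600.0; //about 5.56 m/s
double acceleration = deltaSpeedMS / 5.0;              //about 1.11 m/s^2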

With the above I've discussed enough for someone to model acceleration. Making use of the existing XNA classes, I could use a Vector3 to represent the position of an object in three-dimensional space. To represent its velocity I can also use a Vector3. The X, Y, and Z values on the Vector3 will indicate how many units the body moves along the X, Y, and Z axes in a second. The rate of acceleration along X, Y, and Z is once again representable with a Vector3.

public class Body
{
    public Vector3 Position;
    public Vector3 Velocity;
    public Vector3 Acceleration;

    public void ApplyVelocity(float seconds)
    {
        Position += seconds*Velocity;
    }

    public void ApplyAcceleration(float seconds)
    {
        Velocity += seconds*Acceleration;
    }
}

If you've ever taken any courses in calculus then you may recognize velocity as being a first-order derivative of position with respect to time. Acceleration is a first-order derivative of velocity with respect to time, or a second-order derivative of position. It's possible to have a third-order element to describe changes in acceleration (I can think of some scenarios involving propulsion engines to which this would apply) but I won't go that far in any of these examples.

It is clear in the above code that I am using seconds as my unit of time. You can interpret the units of distance to be whatever you want (meters, feet, pixels, yards, whatever). However one decides to map the numbers to the physical world has no impact on the meaning of the examples. With the above code, if I give an object a position, a velocity, and an acceleration, we can then simulate its movement by letting the velocity get adjusted by the acceleration (ApplyAcceleration) and then applying the velocity to the position (ApplyVelocity). Rather than manipulate my bodies independently I'll create a world in which to put them and will manipulate them all at once through this world. Since my world will be a collection of bodies I find it appropriate to derive my world class from a List base class.

public class World: List<Body>
{
    public float Width { get; set;  }
    public float Height { get; set;  }

    public World()
    {   
    }

    public void Step(float seconds)
    {
        foreach (var b in this)
        {
            b.ApplyAcceleration(seconds);
        }
        foreach (var ball in this)
        {
            ball.ApplyVelocity(seconds);
        }
    }
}

The Width and Height members on my world have no impact on the logic; they are only there for scaling the world on a display (elements that move outside of the world's height or width are not guaranteed to be displayable on the screen, but those within the area of the world will always be rendered).

With this simple implementation it's possible to simulate acceleration. Let's take a look at the XNA source in which I host my two classes. In addition to the declarations that one gets in an XNA project I've added a few more to the Game1 class.

private World _myWorld;         //For manipulating the elements of my world
private float _worldScale;      //for scaling the world up/down to fit on the screen
private Vector2 _worldOffset;   //for centering the world on the screen
private Texture2D _ballTexture; //an image of a ball to represent the body

In the constructor for the game I need to instantiate my world and add some bodies to it. I'm adding bodies with three types of movement. The first body will have a velocity but zero acceleration. So it will move at a constant speed. The second body will accelerate in the same direction as its initial velocity. So it will speed up. The third object will have a velocity causing it to move up and to the right but it will accelerate in a downward direction. The path that this body follows will conform to what many know as projectile motion.

_myWorld = new World(){Width = 320, Height = 240};
//Constant Motion Body
_myWorld.Add(new Body(){
        Position = new Vector3(10, 70, 0),
        Velocity = new Vector3(20.0f, 0f, 0),
        Acceleration = new Vector3(0, 0, 0)
    });

//Body accelerating in same direction as movement
_myWorld.Add(new Body() { 
        Position = new Vector3(10, 140, 0), 
        Velocity = new Vector3(20.0f, 0f, 0), 
        Acceleration = new Vector3(3.0f, 0f, 0)
    });

//Body accelerating in direction not parallel to initial velocity.
// will follow a projectile path.
_myWorld.Add(new Body() { 
        Position = new Vector3(0, 0, 0), 
        Velocity = new Vector3(45.0f, 60f, 0), 
        Acceleration = new Vector3(-1, -9, 0) 
    });

While loading the texture I also calculate a scale factor and offset so that the world will be centered on the screen. With the code written this way it is less dependent on the physical resolution of the device on which it is run. I could achieve more resolution independence if I used a vector image of a ball instead of the bitmap, but for this example the benefit wouldn't outweigh the effort for my immediate purposes.

protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);

    float width = GraphicsDevice.Viewport.Width;
    float height = GraphicsDevice.Viewport.Height;

    var widthScale = width/_myWorld.Width;
    var heightScale = height/_myWorld.Height;
    if(heightScale<widthScale)
    {
        _worldScale = heightScale;
        _worldOffset = new Vector2((width-((int)(_myWorld.Width*heightScale)))/2 ,0);
    }
    else
    {
        _worldScale = widthScale;
        _worldOffset = new Vector2(0, (height- ((int)(_myWorld.Height * widthScale))) / 2);
    }

    _ballTexture = Content.Load<Texture2D>("canonBall");
}

The Update method contains only a single new line beyond what is already present in a new project. The line that I have added makes a call so that the World instance will update the bodies' velocities and positions.

protected override void Update(GameTime gameTime)
{
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    _myWorld.Step((float)gameTime.ElapsedGameTime.TotalSeconds);

    base.Update(gameTime);
}

Lastly I need to render the world to the screen. I've made a function to map coordinates within my world to screen coordinates based on the scale factor and offset calculated earlier. It is used within the Draw method when rendering the bodies.

Vector2 MapPosition(Vector3 source)
{
    Vector2 retVal = new Vector2();
    retVal.X =  source.X*_worldScale + _worldOffset.X;
    retVal.Y = (_myWorld.Height - source.Y*_worldScale) + _worldOffset.Y;
    return retVal;
}

protected override void Draw(GameTime gameTime)
{
    
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    foreach (var body in _myWorld)
    {
        var newCoord = MapPosition(body.Position);
        spriteBatch.Draw(_ballTexture,newCoord, Color.White);
    }
    spriteBatch.End();
    base.Draw(gameTime);
}

It is easier to see the program running than it is to infer how it operates from static images, so I've recorded the program in action and have uploaded it to YouTube. And motivated by the "because I can!" justification, I've also created the solutions so that the example program will run on the PC and the Xbox 360 in addition to running on Windows Phone.

No talk of acceleration would be complete without mention of Newton's second law of motion. The second law of motion describes the relationship between an object's mass, the force applied to that mass, and the object's change in motion. The mathematical expression for this relationship is F=ma, where F is the force applied to the mass. Like velocity and acceleration, force also has a direction component. Even without placing real units of force or mass in the equation there are a few things that can be inferred from this relationship. If we rearrange the expression to F/m=a (which is just a different expression of the same relationship) then we can see the difference that the same amount of force would have on objects of two different masses. If we had one object with a mass of 12 kilograms and another of 6 kilograms and applied the same amount of force to both, we would find the 6 kilogram object would accelerate at twice the rate of the 12 kilogram object. I see this concept employed most often in games in which the player can customize a vehicle and change parts that have various impacts on the vehicle's mass or the engine's strength (force).
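
The Body class from earlier could be extended to model this. The Mass field and ApplyForce method below are additions of mine rather than part of the earlier example:

public class BodyWithMass : Body
{
    public float Mass = 1.0f; //kilograms

    //F = ma, rearranged to a = F/m: the same force accelerates
    //a lighter body at a proportionally greater rate.
    public void ApplyForce(Vector3 force)
    {
        Acceleration = force / Mass;
    }
}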

The SI unit of force is the Newton (guess who that is named after). A Newton is defined as the amount of force needed to accelerate one kilogram of mass at a rate of one meter/second^2. If you held a kilogram of mass in your hands under the influence of gravity it would exert a force of about 9.81 Newtons downwards. Now let's say you take a coin and apply force to it to slide it across your desk. Once it is no longer under the influence of you pushing it the coin will begin to slow down. This is because of friction. There are usually several different types of friction applied to a moving body. That is a discussion deserving of a post of its own, so for now I will bring this post to a close, and in the next one I'll continue with either friction or gravity.

Building content threw InvalidOperationException D3DERR_NOTAVAILABLE

I was working on a Windows Phone XNA example earlier and decided to run it on my Xbox. After duplicating the project as an Xbox 360 project I kept running into the same error that seemed to have no explanation.

Building content threw InvalidOperationException D3DERR_NOTAVAILABLE

It took a bit of time to figure out what was going on here. From doing a search on the Internet I found out that certain Direct3D programs cannot run at the same time as an XNA project. I don't fully understand why. But as it turns out, the program that was causing me to experience this problem was the Zune client. Once I closed the Zune client I was able to compile and run my Xbox 360 program. Weird.

Calculating Distance from GPS Coordinates

I've been carrying this equation around forever and a day and thought I would share it. With this equation you can calculate the distance between GPS coordinates. I tend to use SI units, but you should be able to easily adjust it for units of your choosing.

using System; 
using System.Device.Location; 
 
namespace J2i.Net.GPS
{
    public static class DistanceCalculator 
    { 
 
        public const double EarthRadiusInMiles = 3956.0; 
        public const double EarthRadiusInKilometers = 6367.0; 
        public const double EarthRadiusInMeters = EarthRadiusInKilometers*1000; 
 
        public static double ToRadian(double val) { return val * (Math.PI / 180); } 
        public static double ToDegree(double val) { return val * 180 / Math.PI; } 
        public static double DiffRadian(double val1, double val2) { return ToRadian(val2) - ToRadian(val1); } 
 
 
 
        public static double CalcDistance(GeoCoordinate p1, GeoCoordinate p2) 
        { 
            return CalcDistance(p1.Latitude, p1.Longitude, p2.Latitude, p2.Longitude, EarthRadiusInKilometers); 
        } 
 
        public static Double Bearing(GeoCoordinate p1, GeoCoordinate p2) 
        { 
            return Bearing(p1.Latitude, p1.Longitude, p2.Latitude, p2.Longitude); 
        } 
 
        public static double CalcDistance(double lat1, double lng1, double lat2, double lng2, double radius) 
        { 
 
            return radius * 2 * Math.Asin(Math.Min(1, Math.Sqrt((Math.Pow(Math.Sin((DiffRadian(lat1, lat2)) / 2.0), 2.0) 
                + Math.Cos(ToRadian(lat1)) * Math.Cos(ToRadian(lat2)) * Math.Pow(Math.Sin((DiffRadian(lng1, lng2)) / 2.0), 2.0))))); 
        } 
 
        public static Double Bearing(double lat1, double lng1, double lat2, double lng2)
        {
            //Rhumb-line bearing; the inputs are in degrees, so convert to radians first.
            lat1 = ToRadian(lat1);
            lat2 = ToRadian(lat2);
            var dLon = ToRadian(lng2 - lng1);
            var dPhi = Math.Log(Math.Tan(lat2 / 2 + Math.PI / 4) / Math.Tan(lat1 / 2 + Math.PI / 4));

            if (Math.Abs(dLon) > Math.PI)
            {
                dLon = dLon > 0 ? -(2 * Math.PI - dLon) : (2 * Math.PI + dLon);
            }
            return ToDegree(Math.Atan2(dLon, dPhi));
        }
    }
}

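To give a sense of scale, here's a quick usage sketch; the coordinates below are approximate values for Atlanta and New York, which are roughly 1,200 km apart:

var atlanta = new GeoCoordinate(33.749, -84.388);
var newYork = new GeoCoordinate(40.713, -74.006);

double distanceInKm = DistanceCalculator.CalcDistance(atlanta, newYork); //about 1,200
double heading = DistanceCalculator.Bearing(atlanta, newYork);           //constant (rhumb-line) heading in degrees
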
Is My Media Locked?

If you've used the Media element on Windows Phone (or one of the other media-related components) then you probably know that it won't work while the phone is connected to Zune. But Zune is needed for debugging. So how do you debug if part of your software package renders your phone half-functional while you are debugging?! Well, you don't actually need to have Zune running to debug. There was a command line utility in the October update to the Windows Phone Developer Tools called WPConnect.exe. Upon connecting your phone to your computer Zune will open. Close it and run WPConnect.exe and you'll be able to deploy, run, and debug without your media library being crippled.

But after distribution of your program it's still possible for a user to have their media functionality locked if they try to run the program you wrote while the phone is connected to Zune. You'll probably want to notify the user of what must be done to unlock the full functionality of your program. Eric Fleck of Microsoft had a suggestion that seems to work pretty well (original source here). In short, he checks to see if the phone reports that it is connected to an Ethernet adapter. If it does then chances are it is connected to a computer with Zune. There are scenarios in which the phone could report that it is connected to an Ethernet adapter while the media library is not locked (ex: when connected using WPConnect.exe). The code is pretty simple:

 

void CheckNetworkStatus()
{
    if (NetworkInterface.GetIsNetworkAvailable())
    {
        if (NetworkInterface.NetworkInterfaceType ==
            NetworkInterfaceType.Ethernet)
        {
            MediaState = "Possibly locked, disconnect Zune";
            return;
        }
    }
    MediaState = "All's Well! Media is available!";
}

If you want the code in project form you can find it here.

Writing a Proper Wave File

Currently one of the recurring questions I see in the Windows Phone 7 forums deals with playing back data that was recorded from the microphone. Often developers will write the sound bytes that they receive from a microphone to a file and then try to export the file for playback, or play it back using the media classes on the phone, only to find that the file can't be processed. During my lunch break today I had a chance to throw something together that I think will point those developers in the right direction.

Why Won't the File Play

The file won't play because none of the components or software to which it has been given knows anything about the file. If you record from the microphone and dump the raw bytes to a file, the things you are not writing include the sample rate, the number of bits per sample, the file format, and so on. You need to prepend the file with all of these things for it to be usable by the media classes. Having done a quick Bing search I found a description of the needed header on https://ccrma.stanford.edu/courses/422/projects/WaveFormat/. Using that I put together a quick desktop application that produces a playable wave file. I targeted the desktop because the computer I'm using doesn't have the phone developer tools, but the code will be pretty much the same for the desktop as on the phone. The only difference will be in the creation of your file. While I am creating a file stream directly, on the phone you would create a stream in isolated storage.

Simulating Audio Data

I need some data to write to my file. As is my preference, I've created a function that will populate an array of bytes with the output of the sine function. As its parameters it takes the sample rate, the length of time that I want the sound to play, the wave's frequency, and its magnitude (with 0 being the lowest magnitude and 1 being the greatest) and returns the data in a byte array. You would populate your array with the bytes from the recording instead. The code I used to do this follows.

public static byte[] CreateSinWave( 
        int sampleRate, 
        double frequency, 
        TimeSpan length, 
        double magnitude
    )
{
    int sampleCount = (int)(((double)sampleRate) * length.TotalSeconds);
    short[] tempBuffer = new short[sampleCount];
    byte[] retVal = new byte[sampleCount*2];
    double step = Math.PI*2.0d*frequency/sampleRate; //phase advance per sample
    double current = 0;
            
    for(int i=0;i<tempBuffer.Length;++i)
    {
        tempBuffer[i] = (short)(Math.Sin(current) * magnitude * ((double)short.MaxValue));
        current += step;
    }

    Buffer.BlockCopy(tempBuffer,0,retVal,0,retVal.Length);
    return retVal;
}

Populating the Wave Header

There are better ways to do this, much better ways. But I'm just trying to create something satisficing in a short period of time.

Trivial Fact: Satisficing is a term coined by Herbert Simon to mean sufficiently satisfying. A satisficing solution may not be the best solution, but it gets the job done!

Looking at the chart that describes a wave header, I wrote either literal bytes or calculated values, where the calculated values are based on sample rate, number of channels, and a few other factors. There's not a lot to say about it; the code follows.

static byte[] RIFF_HEADER = new byte[] { 0x52, 0x49, 0x46, 0x46 };
static byte[] FORMAT_WAVE = new byte[] { 0x57, 0x41, 0x56, 0x45 };
static byte[] FORMAT_TAG  = new byte[] { 0x66, 0x6d, 0x74, 0x20 };
static byte[] AUDIO_FORMAT = new byte[] {0x01, 0x00};
static byte[] SUBCHUNK_ID  = new byte[] { 0x64, 0x61, 0x74, 0x61 };
private const int BYTES_PER_SAMPLE = 2;

public static void WriteHeader(
     System.IO.Stream targetStream, 
     int byteStreamSize, 
     int channelCount, 
     int sampleRate)
{

    int byteRate = sampleRate*channelCount*BYTES_PER_SAMPLE;
    int blockAlign = channelCount*BYTES_PER_SAMPLE;

    targetStream.Write(RIFF_HEADER,0,RIFF_HEADER.Length);
    targetStream.Write(PackageInt(byteStreamSize+44-8, 4), 0, 4);

    targetStream.Write(FORMAT_WAVE, 0, FORMAT_WAVE.Length);
    targetStream.Write(FORMAT_TAG, 0, FORMAT_TAG.Length);
    targetStream.Write(PackageInt(16,4), 0, 4);//Subchunk1Size    

    targetStream.Write(AUDIO_FORMAT, 0, AUDIO_FORMAT.Length);//AudioFormat   
    targetStream.Write(PackageInt(channelCount, 2), 0, 2);
    targetStream.Write(PackageInt(sampleRate, 4), 0, 4);
    targetStream.Write(PackageInt(byteRate, 4), 0, 4);
    targetStream.Write(PackageInt(blockAlign, 2), 0, 2);
    targetStream.Write(PackageInt(BYTES_PER_SAMPLE*8, 2), 0, 2);//BitsPerSample
    //targetStream.Write(PackageInt(0,2), 0, 2);//Extra param size
    targetStream.Write(SUBCHUNK_ID, 0, SUBCHUNK_ID.Length);
    targetStream.Write(PackageInt(byteStreamSize, 4), 0, 4);
}

static byte[] PackageInt(int source, int length=2)
{
    if((length!=2)&&(length!=4))
        throw new ArgumentException("length must be either 2 or 4", "length");
    var retVal = new byte[length];
    retVal[0] = (byte)(source & 0xFF);
    retVal[1] = (byte)((source >> 8) & 0xFF);
    if (length == 4)
    {
        retVal[2] = (byte) ((source >> 0x10) & 0xFF);
        retVal[3] = (byte) ((source >> 0x18) & 0xFF);
    }
    return retVal;
}

That's pretty much all you need to know. To use the code I wrote a simple console mode program.

static void Main(string[] args)
{
    var soundData = WaveHeaderWriter.CreateSinWave(44100, 120, TimeSpan.FromSeconds(60), 1d);
    using(FileStream fs = new FileStream("MySound2.wav", FileMode.Create))
    {
        WaveHeaderWriter.WriteHeader(fs, soundData.Length, 1, 44100);
        fs.Write(soundData,0,soundData.Length);
        fs.Close();
    }
}

I opened the resulting output in Audacity and the results are what I expected.

And of course as a final test I double clicked on the file. It opened in Windows Media Player and played the sine wave.

So there you have it, the program works! When I get a chance I will try to make a version of this in Windows Phone 7. Those of you that have WPDT without the full version of Visual Studio will not be able to compile this program directly. But the binary is included in the source code if you want to run it.

At the Next Atlanta Silverlight Meeting: WP7

I'll be speaking at the next Atlanta Silverlight Developer's Meeting. If you're in the Atlanta area stop by and say "Hi!". Here's the info.

When: Wednesday, October 27, 2010 6:30 PM
Where: Five Seasons Brewing

Windows Phone: How Did We Get Here and Where are We Headed?

On Wednesday, October 27th, Joel Johnson will be presenting on the past, present and future of Silverlight development on Windows Phone. We will meet at 6:30 pm at 5 Seasons Brewing at the Prado.

Bio

Joel Johnson is a Device Application Development MVP and is currently transitioning into the Windows Phone Development MVP program. He has extensive experience with Windows Mobile, Silverlight and XNA. He has also been the caretaker of one of the rare early Windows Phone devices for several months.

Abstract

With Microsoft's official WP7 launch with AT&T Monday, we should soon be seeing signs of the much anticipated Windows Phone marketing blitz. The Windows Phone marketplace is now open for early submissions and AT&T has announced three new phones which will become available in the US in early November.

Now that we are at the end of the year-long rush by Microsoft to get a phone out before Christmas, Joel will help us take a moment to see how we got to this point. The Microsoft phone strategy was once guided by a desire for a feature-rich device targeted at the enterprise. It is now guided by a desire for a user-experience-rich device targeted at consumers. Moreover, the old developer platform has not only been overhauled but completely replaced with a Silverlight + XNA development platform. Joel will show how these two technologies work together on the phone, demonstrating native XNA features as well as how we as Silverlight developers can tap into the XNA APIs to develop rich Silverlight applications for the phone.

RSVP to this Meetup:
http://www.meetup.com/The-Atlanta-Silverlight-Meetup-Group/calendar/15095608/

Windows Phone 7 Launch Events

There's plenty of buzz in the air about Windows Phone 7. If you are interested in WP7 then you'll be interested in the following events. 

Monday 11 October at 9:30 AM EDT you can watch the Windows Phone 7 launch event live! Here's the URL for the streaming: http://www.microsoft.com/presspass/presskits/windowsphone/ There's no telling what type of new information we'll hear at the announcement. 

 

The others are the Windows Phone 7 launch events. I have the information for the events in the USA; if you are in one of the nations in which the phone will be launched this year you may want to check whether there are events in your area. These are free, two-day events. There will be real Windows Phone 7 devices at the events and plenty of new information on what's coming.

 

Day #  Date    City           State  Venue & Registration Link
Day 1  12-Oct  Boston         MA     Royal Sonesta Hotel Boston
Day 2  13-Oct  Boston         MA     Royal Sonesta Hotel Boston
Day 1  12-Oct  Detroit        MI     Westin Book Cadillac Hotel
Day 2  13-Oct  Detroit        MI     Westin Book Cadillac Hotel
Day 1  12-Oct  Mountain View  CA     Microsoft Silicon Valley Office
Day 2  13-Oct  Mountain View  CA     Microsoft Silicon Valley Office
Day 1  19-Oct  Chicago        IL     Swissôtel Chicago
Day 2  20-Oct  Chicago        IL     Swissôtel Chicago
Day 1  19-Oct  New York       NY     Marriott Marquis
Day 2  20-Oct  New York       NY     Marriott Marquis
Day 1  20-Oct  Dallas         TX     InterContinental Hotel
Day 2  21-Oct  Dallas         TX     InterContinental Hotel
Day 1  20-Oct  San Francisco  CA     San Fran Design Center
Day 2  21-Oct  San Francisco  CA     San Fran Design Center
Day 2  22-Oct  Atlanta        GA     Georgia World Congress Center

Using DynamicSoundEffectInstance

Download the Code (93.1 KB)

After an Atlanta Silverlight Users meeting I was eating with a couple of other MVPs and we were talking about the things we were doing and would like to do with Windows Phone 7. I had mentioned that I would like to have direct access to the sound buffer used in XNA. James Ashley immediately responded with "DynamicSoundEffectInstance!" At the time James had never used it, and I had just discovered it, so I needed to get some more information on how it works. So that night I stayed up a little later than usual so that I could figure it out. With the documentation for the class still being in early form I didn't quite find everything that I wanted to know, but I was able to figure it out.

In writing this I'm going to assume that you know a bit about the mechanics of how sound and speakers work. If not, you'll want to read the Wikipedia article on digital-to-analog converters.

In this article I simply want to get to the point of being able to play a tone and control its frequency. From a high level view this is what we will need to do:

 

  1. Create a few byte buffers that will hold the next part of the sound to be played
  2. Populate one of the byte buffers with the wave form to be played
  3. Give the buffer to a DynamicSoundEffectInstance
  4. Tell the SoundEffectInstance to start playing
  5. In response to the BufferNeeded event populate the next buffer and submit it
  6. Go to step 5

Now to convert those steps into something more concrete, let's start with allocating the buffers.

Creating the Buffer

The size of the buffer you choose is largely going to be driven by what latency you want your sounds to have and the desired quality of the sound you are generating. In general low latency is good. With low latency there is less of a time difference between when your program generates a sound and when the user hears it. If you made a program to simulate a piano you would want low latency so that the user perceives that the device is playing sound as soon as they press a key on the screen. Naturally you will also want high quality. But there are trade-offs as you aim for higher quality and lower latency, just as there are trade-offs in aiming for lower quality and higher latency.

To produce higher quality sounds you will need a higher sample rate. If you raise the sample rate used to play back a sound then you will either need to increase the size of your buffer (so more memory is being consumed) or you will need to populate and supply smaller buffers more frequently (so more CPU time is being consumed). While lower quality uses less memory and less CPU time, the negative part is evident: your program won't sound as good. If you aim for lower latency you will need to use smaller buffers, but that also means that the DynamicSoundEffectInstance is requesting new buffers more often (once again, more CPU time). My suggestion for the quality of a sound is to aim for something that is good enough. Don't start off at the 48 KHz sample rate. Start instead at around 22 KHz or lower and see how well that works for you. As for latency, with an XNA program aim for a latency determined by the FPS of your game. If your game is made to run at 30 frames per second then make buffers that are big enough to play 1/30 of a second of sound. A sound can also be in stereo or mono. It goes without saying that twice the memory is needed to generate a sound in stereo as in mono.

Let's for now assume that we are creating a DynamicSoundEffectInstance with a sample rate of 22KHz in mono. We could instantiate one with the following:

var dynamicSoundEffectInstance = new DynamicSoundEffectInstance(22000,AudioChannels.Mono);

We can calculate the size of the buffers in one of two ways. The DynamicSoundEffectInstance always plays 16-bit sound samples (2 bytes). If I wanted to be able to play 1/30th of a second of sound at a 22 KHz sample rate, the number of bytes needed for this buffer would be 22000*(1/30)*2*1 ≈ 1466. The last two numbers in the equation (2*1) are the number of bytes in a sample multiplied by the number of channels to be played. Were I playing a stereo sample the final number would have been 2 instead of 1. I could instead have asked the DynamicSoundEffectInstance to calculate the size of the needed buffer:

var bufferSize = dynamicSoundEffectInstance.GetSampleSizeInBytes(TimeSpan.FromSeconds(1d/30d));
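
Putting that together, allocating a set of buffers might look like the following (three buffers, for reasons discussed later):

byte[][] audioBufferList = new byte[3][];
for (int i = 0; i < audioBufferList.Length; ++i)
{
    audioBufferList[i] = new byte[bufferSize];
}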

Populate the Buffer

The data that you put into the buffer is derived from the sound that you are playing. If you've been reading astutely you may have noticed that I've stated that DynamicSoundEffectInstance consumes an array of bytes (8 bits) but the audio must be composed of 16-bit samples. In C++ one might just cast the array to whatever held the data; the language would let you do that, even if doing so made no sense. In the C# language one can also do that by wrapping the code in an unsafe block. But many feel that code wrapped in unsafe blocks is potentially not safe (I wonder why), and Silverlight won't let you do such things. So it's necessary to convert your 16-bit data to byte data using other means. There's a method available for doing so, but I'll also describe how to do it manually.

A 16-bit (two byte) number has a high order byte and a low order byte. High and low order could also be taken to mean more significant and less significant. In the decimal number 39 the three is in a more significant position than the nine; it has more of an impact on the final value. The same concept transfers to numbers composed of bytes. Our bytes need to be in little endian order: the low order byte must be placed in our array before the high order byte. The low order byte can be singled out with a bit mask; the high order byte, with bit shifting.

byte lowOrder = (byte)(SomeNumber & 0xFF);
byte highOrder = (byte)(SomeNumber >> 0x08); 

Now that you know what needs to be done, here's the utility method that will essentially do the same thing.

Buffer.BlockCopy(
                   sourceBuffer
                 , sourceStartIndex
                 , destinationBuffer
                 , destinationStartIndex
                 , ByteCount)

The sourceBuffer element in this case would be the array of 16-bit integers. The destinationBuffer would be the destination byte buffer. Two things to note. First, the destination buffer must have twice the number of elements as the source buffer (since bytes are half the size of short integers). Second, the last argument is the number of bytes to be copied and not the number of elements. If you get this wrong you'll either get an out-of-range exception or something that sounds pretty bad.
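
For a buffer of 16-bit samples the call works out to something like this (sampleCount here is whatever number of samples you've rendered):

int sampleCount = 733; //e.g. 1/30th of a second at 22 KHz, mono
short[] samples = new short[sampleCount];          //rendered audio, one short per sample
byte[] playableBuffer = new byte[sampleCount * 2]; //twice as many bytes as samples

//The final argument counts bytes, not samples.
Buffer.BlockCopy(samples, 0, playableBuffer, 0, playableBuffer.Length);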

Start Playing the Sound

Once the DynamicSoundEffectInstance has a buffer I call Play to get things rolling.

Submitting the Buffers to the DynamicSoundEffectInstance

The DynamicSoundEffectInstance has an event called BufferNeeded that will be raised when the object is ready for more sound data to play. If you are making an XNA program you may want to avoid the object getting to the point where it needs to raise this event. You can reduce overhead by feeding the class data at the same rate at which it is consuming it. This can easily be done by making the buffers big enough to play as much sound as can be played in one cycle of your game loop. If you are making a Silverlight application you'll be making use of this event. From what I've found, the DynamicSoundEffectInstance class will hold up to two buffers: the one it is playing from, and another in place to be played next. So I prefer to make three buffers, giving me a third buffer into which I can render the next block of sound. When the BufferNeeded event is raised I populate the next buffer and pass it in through the SubmitBuffer method, using the same buffers in round robin fashion.
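
A sketch of that round robin pattern follows; FillCurrentBuffer stands in for whatever renders your next block of sound, and the field names here are mine rather than anything prescribed by the class:

byte[][] _audioBufferList = new byte[3][]; //allocated as shown earlier
int _currentBufferIndex = 0;

void OnBufferNeeded(object sender, EventArgs e)
{
    byte[] buffer = _audioBufferList[_currentBufferIndex];
    FillCurrentBuffer(buffer); //render the next block of sound
    _dynamicSoundEffectInstance.SubmitBuffer(buffer);
    _currentBufferIndex = (_currentBufferIndex + 1) % _audioBufferList.Length;
}

//elsewhere: _dynamicSoundEffectInstance.BufferNeeded += OnBufferNeeded;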

FrameworkDispatcher.Update()

This is only needed if you are using the class from within Silverlight. FrameworkDispatcher.Update will need to be called at least once before playing your sound and must continue to be called periodically. The Windows Phone documentation already walks one through a class that will do this. Take a look at this article to see how that class works.
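
The gist of that helper class is a timer that keeps FrameworkDispatcher.Update ticking. A bare-bones sketch (the 33 millisecond interval is my choice; see the documentation's version for a more robust implementation):

//Requires Microsoft.Xna.Framework and System.Windows.Threading.
FrameworkDispatcher.Update(); //call once before the first sound plays

var xnaTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(33) };
xnaTimer.Tick += (sender, args) => FrameworkDispatcher.Update();
xnaTimer.Start();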

My Sound Function and Controlling the Frequency

While the sound data passed to DynamicSoundEffectInstance must be signed 16-bit integers, I wanted to keep my sound generating functions decoupled from this constraint and also decoupled from the specific frequency being played. I achieved these goals in a class I've created named SoundManager. While SoundManager contains the code to generate a sine wave, the actual sound function used is assigned to the property SoundFunction. One only needs to assign a different function to this property to generate a different sound.

To decouple the function from the data format, I've written my program so that it expects the sound function to return its data as a double in the range [-1..1]. I'm not doing range checking, to avoid the overhead (so if you use my code it's up to you to make sure your code behaves). The function consumes two parameters: a double value to represent time and an integer value to represent the channel. Channel would presumably be 0 for the left channel and 1 for the right channel; for generating mono sound this parameter can be ignored. The time parameter indicates which part of the cycle of a sound wave is being requested. The values returned by the sound function for time values from 0 to 1 make up one cycle of the sound, from 1 to 2 the second cycle, and so on. Since the time parameter is being used to represent the position within a cycle instead of actual time, the sound function is insulated from the actual frequency being generated. I can change the frequency of the sound being played by increasing or decreasing the intervals between the time values passed. Shorter intervals will lead to lower frequencies; larger intervals will lead to higher frequencies. Note that the highest frequency you can create is going to be no higher than half the sample rate, so with a 22 KHz sample rate you would only be able to generate sounds with frequency components as high as 11 KHz. Given that most sounds we hear are a complex mixture of sound components, keep in mind that there may be some frequency components higher than what may be recognized as the dominant frequency. Playing such sounds at a high frequency could result in some of the higher frequency components being stripped out. You can find more information on this concept under the topic Nyquist rate.

The method FillBuffer will call this function for each sample needed to fill the next buffer.

double MySin(double time, int channel) { return Math.Sin(time*Math.PI*2.0d); }

The code for filling the sound buffer is as follows

void FillBuffer()
{
    if (SoundFunction == null)
        throw new NullReferenceException("SoundFunction");
    //Grab the next buffer in the round robin rotation.
    byte[] destinationBuffer = _audioBufferList[CurrentFillBufferIndex];
    if (++CurrentFillBufferIndex >= _audioBufferList.Length)
        CurrentFillBufferIndex = 0;
    short result;

    for (int i = 0; i < destinationBuffer.Length / (ChannelCount * BytesPerSample); ++i)
    {
        int baseIndex = ChannelCount * BytesPerSample * i;
        for (int c = 0; c < ChannelCount; ++c)
        {
            result = (short)(MaxWaveMagnitude * SoundFunction(_Time, c));

#if(MANUAL_COPY)
            //Low order byte first, then high order byte (little endian).
            int sampleIndex = baseIndex + c * BytesPerSample;
            destinationBuffer[sampleIndex] = (byte)(0xFF & result);
            destinationBuffer[sampleIndex + 1] = (byte)(0xFF & (result >> 0x8));
#else
            _renderingBuffer[i * ChannelCount + c] = result;
#endif
        }
        _Time += _deltaTime;
    }
#if(!MANUAL_COPY)
    Buffer.BlockCopy(_renderingBuffer, 0, destinationBuffer, 0, _renderingBuffer.Length * sizeof(short));
#endif
    OnPropertyChanged("Time");
    OnPropertyChanged("PendingBufferCount");
}

If you deploy the code attached to this entry you'll have a program that can play a sine wave. Pretty boring, I know. But I wanted to keep the sound being played in this first mention of DynamicSoundEffectInstance simple. The next time I mention it I want to talk about generating more complex sounds and will probably say little about using the class itself outside of referencing this entry.