Earlier today Microsoft announced its plans for adjusting to the world economy. The news story I read stated that mid-range PC sales were down while server software sales were up. I can't help but associate that with what a Sony exec called "a race to the bottom," in which consumers and OEMs begin to target building the cheaper computer instead of the higher-performing computer. Combined with the increasing emphasis being placed on Cloud Computing and Web Services, it all supports a prediction that software is going to move from individual PCs back to the servers. We can find evidence of this now. Previously, if you wanted to work on a Word document you needed a computer with sufficient space and a reasonably powered processor. You would then need to purchase and install Microsoft Word, and you could edit documents from your siloed computer. While you can still do this today, you also have the option of using an online service to do essentially the same thing. You can use a low-powered computer with any of a variety of operating systems, as long as that computer has a supported browser. The power of the individual computer matters a little less, and connectivity matters a whole lot more, as applications begin to target what are essentially modern terminals.
It's possible that my view of the future is slightly exaggerated, but I doubt it is completely wrong. That being said, my plan for adjusting to what I think to be the needs of the future is to accelerate my learning path for the Microsoft Azure Services Platform and Live Services. I've been experimenting with Live Services for some time now and have found them useful for quickly putting together applications. An Azure application runs across several 64-bit Windows Server 2008 servers. Installation of patches is handled for you, failover is taken care of, and so are several other maintenance tasks, making it easier to develop highly available, secure, and redundant applications. Effective use of the platform is going to require a different way of thinking than developing the traditional application, so I plan to get started tonight with the intent of having information to teach and share within the next week.
I was reading through Mike Francis's blog and saw a rather useful offer from Microsoft for free eBooks on LINQ, ASP.NET AJAX, and Silverlight 2.0. If you are interested, all you have to do is register and you can download the eBooks. I'll be teaching a Silverlight class in a little over a month, so I hope to review the Silverlight book fairly soon so that I can consider recommending it to the students.
I'm glad Mike Hall posted on this. There is a free book available to those wanting to learn C#. This is especially convenient for me because I was going to write a series of "Getting Started" articles on Windows Mobile development, and I really see the task of learning the C# (or VB.NET) language as separate from learning how to target Windows Mobile devices. I'll be referencing this book in one of my next articles.
If you've developed a Windows Mobile application and would like to see it listed in the Windows Mobile Application Catalog, it will need to be tested against the certification requirements and code signed. This normally costs up to $800, but Microsoft is offering a program whereby you can have these services performed for free. Just visit the Innovation Site and register. Remember to make sure that your application adheres to the certification guidelines before submitting it; otherwise you could consume part of the $800 worth of service unnecessarily.
As promised, I uploaded the wrapper for the Skyhook Wireless SDK for Windows Mobile to the Skyhook Wireless discussion group. I tried the wrapper out with the 3.0 SDK and it works fine. It exposes all of the SDK's functionality with the exception of the WPS_tune_location function (I have no idea what that function does). So I am done with my interaction with the Skyhook Wireless SDK for now.
If you program in C# or VB.NET and want to use the SDK, you can get the code from the files area of the Skyhook Wireless discussion group.
I was reading through WMPowerUser yesterday and happened upon an article on Augmented Reality on Windows Mobile. The article shows some videos from the Christian Doppler Laboratory in which a mobile device is used to track photographs and business cards and project 3D elements onto them. Very cool! Best of all, there is code available for the computer vision library that they used. Follow the link for the code and videos.
In reading through my Google Analytics report I see that there are a few searches that are always at the top of the list.
Beyond having mentioned the Windows Mobile Power Toys once in referring to the "24 Hours of Windows Mobile" webcast, I've not actually spoken about them. But given their level of popularity, I've decided to write an article covering them along with the EQATEC Profiler and the EQATEC Tracer. All of these are free tools. I wrote my outline for the article last night and am now working on the example code. This will be something usable both by those who use Visual Studio and by those who develop for Windows Mobile without it.
It's time to roll up my sleeves and get my hands dirty with some native code. I've got two ideas for applications of facial recognition, one for the desktop and the other for Windows Mobile. I'll need to take some time to plan these out, but here are my ideas...
There was a time when my family members would use my computer and change my settings (wallpaper, resolution, sound scheme, and so on), and it would annoy me to no end. When Microsoft released Windows XP the problem was solved. We had a much better implementation of user profiles that didn't require anyone to remember a user name and didn't leave anyone locked out if the previous user had forgotten to log out. Many computers come with webcams, so why not use them to extend this concept? Why can't the computer recognize who is sitting in front of it and act accordingly, automatically changing profiles?
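To make the idea a little more concrete, here is a minimal sketch of the switching logic in Python. Everything in it is made up for illustration: the family member names, the settings fields, and the `recognize()` function, which stands in for whatever face-recognition library a real version would drive from the webcam.

```python
# Hypothetical profile-switching logic: a stubbed-out recognizer hands
# back an identity, and the machine applies that person's saved settings.

PROFILES = {
    "dad":    {"wallpaper": "circuits.png", "resolution": (1680, 1050)},
    "sister": {"wallpaper": "kittens.png",  "resolution": (1024, 768)},
}
GUEST = {"wallpaper": "default.png", "resolution": (1024, 768)}

def recognize(frame):
    """Stand-in for a real face recognizer; returns an identity or None.

    For this sketch the 'frame' is just a dict that may carry a label;
    a real implementation would run face detection and matching here.
    """
    return frame.get("who")

def settings_for(frame):
    """Choose the profile to apply for whoever is at the keyboard."""
    return PROFILES.get(recognize(frame), GUEST)
```

The important design point is the fallback: an unrecognized face gets a guest profile rather than locking anyone out, which preserves the "no one has to remember to log out" property that made XP's profiles an improvement in the first place.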
Where do I know that person from?
I encounter a lot of people that I don't remember (I have a bad memory for faces). Wouldn't it be nice if I could point my camera phone at them and instantly get a reference to their name and Facebook profile? All the technology and processing needed to do this exists today. Someone just needs to pull it all together.
I am finally no longer on a project that I really disliked. I had been on the project for five months, with my role being to configure the software to meet the users' needs. A huge part of this seemingly simple task was a lot of data entry; that could be entry of a user's account information or population of a myriad of other tables and values required by the application. The main problem was not so much the volume of data that the system required but the manner in which the system was designed to take the data. The system had a very poor user interface. One example of where the interface failed was in the task of creating a user account. After entering a user's name and moving to the next field, I was immediately prompted to save or discard that change. Saving the change resulted in the user editor closing, and I had to reopen it, find the user I was modifying, change the next property, and answer that prompt again. I had to be able to convey the level of inefficiency of this interface to others on my team; otherwise any slowness in completing a task could be perceived as stemming from an inadequacy on my part.
To state "The interface in this system is bad, I don't like it" may not be well received. The statement sounds subjective and can be dismissed as nothing more than someone complaining. To sufficiently communicate the state of the interface, I needed to show objectively that it was poor. But a problem with user interface design is that much of it is performed subjectively, without much evaluation of the objective attributes of the interface. I believe part of this could be from people simply not knowing that there are objective evaluation methods for a user interface. There are several methods of evaluating user interface efficiency. I won't bother to name them here; rather, I will refer you to the book "The Humane Interface" by Jef Raskin (the developer of the Macintosh interface and the 31st employee of Apple Computer). Raskin covers both interface metrics and his philosophy when it comes to designing and comparing user interfaces.
Armed with this knowledge, I was able to express with hard numbers the inefficiency of the user interface in this product, point out where the designers went wrong, and show how it could have been improved. While the developers of this product did not make the suggested improvements during my time on the project, I was able to properly set expectations for completion. That's very important to me, since meeting expectations is one of the performance metrics by which I am measured.
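To give a flavor of what "hard numbers" can look like, here is a minimal sketch of one of the techniques Raskin covers: the Keystroke-Level Model (KLM), which predicts task time by summing standard operator times (the Card, Moran & Newell estimates). The two action sequences below are my own rough illustration of the kind of flow described above, not a measurement of the actual product.

```python
# A minimal Keystroke-Level Model (KLM) sketch. Operator times are the
# standard Card, Moran & Newell estimates; the two sequences being
# compared are illustrative only.
OPERATOR_SECONDS = {
    "K": 0.2,   # press a key (skilled typist)
    "P": 1.1,   # point with the mouse
    "B": 0.1,   # press a mouse button
    "H": 0.4,   # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence):
    """Total predicted time for a string of KLM operators, e.g. 'MPBK'."""
    return sum(OPERATOR_SECONDS[op] for op in sequence)

# Editing one field in the prompt-after-every-field flow: point to the
# field, type ~10 characters, answer the save prompt, reopen the editor,
# and find the user again -- three extra point-and-click steps.
bad_flow  = "MPB" + "K" * 10 + "MPB" + "MPB" + "MPB"
# The same edit in a form that saves everything once at the end.
good_flow = "MPB" + "K" * 10

print(f"prompt-per-field: {klm_time(bad_flow):.2f}s")
print(f"save-once form:   {klm_time(good_flow):.2f}s")
```

Even with made-up sequences, the model turns "the interface feels slow" into a per-field overhead you can multiply by the number of fields and users, which is a much harder argument to dismiss.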