Report on Microsoft Research Summit 2011 in Paris, May 10, 2011

Posted by reto wettach in exhibitions, gadgets, innovative interfaces, new technologies, physical interaction design.

Besides the posts I already wrote about the Microsoft Research Summit (1, 2, 3, 4, 5, 6), I reported in my talk on the following inspiring projects:

I met with Shahram Izadi, who was – together with our friend Nic Villar – one of the organizers of TEI 2009 in Cambridge. At the Summit he presented a couple of his projects, which I find very inspiring:

He worked on the SecondLight project, which was first shown to the scientific community in 2008 and which they are now preparing to launch for research institutions. The team presented a new idea for SecondLight: using IR light to track the second layer, which makes it possible to project images onto tilted surfaces without distortion.
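The talk didn't go into implementation details, but the standard way to project onto a tilted surface without distortion is to pre-warp the image with a homography, so the projector's keystone distortion and the pre-warp cancel out. A minimal sketch of that idea (the function names `homography` and `warp` are my own, not from the project):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: find the 3x3 matrix H that maps four
    source points to four destination points (up to scale)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the null vector of A, i.e. the last row of V^T.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp(H, pt):
    """Apply H to a 2D point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Calibrating once with four corner correspondences (where the projector's image corners actually land on the tilted surface) and warping every frame through the inverse of that homography yields an undistorted image from the viewer's perspective.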

Shahram was very proud of his first product on the market: he is the inventor of the Microsoft Touch Mouse, which can interpret multitouch gestures:


(image source)

I think the mouse looks really cool and has a couple of interesting features, e.g. the ability to read three-finger gestures. Shahram is particularly proud of the fact that the capacitive sensor is simply printed onto the shell – it is not a PCB. With this sensor technology one can turn basically any shape into a multi-touch surface.
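How the Touch Mouse itself recognizes gestures wasn't disclosed, but the basic shape of three-finger gesture detection is simple: count the contacts the sensor reports and look at their collective motion. A toy sketch under that assumption (contact format, threshold, and gesture names are all made up for illustration):

```python
def classify_gesture(contacts, dx_threshold=30):
    """Classify one multitouch frame.

    `contacts` is a list of (dx, dy) tuples: each contact's motion in
    sensor units since the previous frame. Returns a gesture name for a
    coherent three-finger horizontal swipe, else None."""
    if len(contacts) != 3:
        return None  # only three-finger gestures are handled here
    mean_dx = sum(dx for dx, _ in contacts) / len(contacts)
    if mean_dx > dx_threshold:
        return "three-finger-swipe-right"
    if mean_dx < -dx_threshold:
        return "three-finger-swipe-left"
    return None
```

A real recognizer would also track contacts across frames and reject fingers moving in opposite directions, but the contact count plus aggregate motion is the core of it.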

In his talk Shahram also mentioned another project of his, SideSight, which allows multi-touch input beside the phone using infrared sensors.


(image source)

Besides Sharam’s work I was inspired by the following presentations:

Microsoft’s new academic search engine, which – similar to a dick length comparison – allows ranking of researchers.

The Worldwide Telescope controlled with the Kinect:

Last, but not least, XMLVM: this open-source initiative has developed a system that translates Android apps to other platforms, such as the iPhone or (at least in the near future) Windows Mobile. They use Android because its SDK is well documented and, from a debugging perspective, a powerful tool. The cross-compiling seems to work fine, even for quite complex games.

Another nice talk was by Jamie Shotton, who showed his impressive work on body part recognition and human pose estimation. I really like the way they taught the computer all these poses through machine learning: they created millions of poses like the one in the image below, and the computer had to learn them… (Jamie showed a slide with all these poses, which was really impressive and beautiful – too bad that I couldn’t find it online.)


(image source)
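If I understood the talk correctly, the classifier behind this work decides per pixel which body part it belongs to, using very cheap comparisons of depth values at offsets around the pixel, where the offsets are scaled by the pixel's own depth so the feature behaves the same whether the person stands near or far from the camera. A toy sketch of one such depth-comparison feature (my own simplified reconstruction, not their code):

```python
def depth_feature(depth, x, u, v, background=10.0):
    """Depth-invariant pixel comparison feature.

    `depth` is a 2D grid (rows of depth values in meters), `x` a (col, row)
    pixel, and `u`, `v` two pixel offsets. The offsets are divided by the
    depth at `x`, so a body part produces a similar response at any
    distance from the camera."""
    def d(p):
        px, py = p
        if 0 <= py < len(depth) and 0 <= px < len(depth[0]):
            return depth[py][px]
        return background  # probes that fall off the image read as far away

    dx0 = d(x)
    p1 = (x[0] + int(round(u[0] / dx0)), x[1] + int(round(u[1] / dx0)))
    p2 = (x[0] + int(round(v[0] / dx0)), x[1] + int(round(v[1] / dx0)))
    return d(p1) - d(p2)
```

Many such features, with randomly chosen offsets, are then combined in decision trees trained on the millions of synthetic poses mentioned above; each feature alone gives only a weak signal (e.g. "is there background just right of this pixel?"), but the ensemble becomes a fast, accurate per-pixel body part classifier.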
