Sunday, March 26, 2006

We are taking a break

We regret that, due to unavoidable circumstances, we need to take a break. There will be no regular postings for the next two weeks; regular postings will resume on April 10th. In the meantime, you may browse our past postings on various topics of interest. We suggest using the search box at the top to find past postings on topics that interest you (e.g. 'Intel', 'Nanotechnology', 'iPod', etc.).




Monday, March 20, 2006

Game Physics via Graphics Card

Graphics chip developer Nvidia and physics specialist Havok have announced that they will show off new software technology at the Game Developers Conference (GDC) next week in San Jose, California. Running physics calculations on a GPU is not a new idea; it has been mooted for the Xbox 360, where it is perfectly possible for part of the console's ATI GPU to be fenced off and used for physics rather than graphics calculations.

Now the concept is being put into practice by Nvidia and Havok. At the GDC the companies will unveil something called Havok FX, which will enable games to run physics calculations through graphics cards supporting Shader Model 3.0 such as the GeForce 6 and 7 cards.

With Havok FX, a gaming system can simulate thousands of colliding rigid bodies, said Nvidia, calculating friction, collisions, gravity, mass and velocity. This lets game developers program their titles to handle complex debris, smoke and fluid effects without bogging down the computer's CPU. In October of last year, ATI discussed using the massive floating-point computation power of a graphics processor to accelerate physics, but Nvidia is the first to bring the feature to market. Called "SLI Physics", the feature offloads physics calculations from the CPU to the graphics processor and promises to bring movie-style effects, from crashing cars to speeding bullets, to the PC screen, all at smooth frame rates. A new software driver for Nvidia's graphics cards will use the second graphics processor to enable the feature in future games.
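
The announcement does not describe Havok FX's internals, but the workload it names, updating thousands of independent rigid bodies every frame, is inherently data-parallel. The sketch below, in Python with NumPy standing in for the GPU's parallel shader units, illustrates the shape of such an update; the body count, time step and restitution value are illustrative assumptions, not Havok's figures.

```python
# A minimal sketch (not Havok FX's actual implementation) of a
# data-parallel rigid-body update: every body is advanced independently,
# which is why the workload maps well onto a GPU's many shader units.
import numpy as np

N = 10_000                                # "thousands of colliding rigid bodies"
dt = 1.0 / 60.0                           # one frame at 60 Hz
gravity = np.array([0.0, -9.81, 0.0])     # m/s^2

positions = np.random.rand(N, 3) * 100.0  # metres
velocities = np.zeros((N, 3))             # m/s

def step(positions, velocities):
    """Advance all bodies one frame under gravity (semi-implicit Euler)."""
    velocities = velocities + gravity * dt
    positions = positions + velocities * dt
    # Crude ground-plane collision: clamp anything below y = 0 and
    # reflect its vertical velocity with some energy loss.
    below = positions[:, 1] < 0.0
    positions[below, 1] = 0.0
    velocities[below, 1] *= -0.5          # assumed restitution coefficient
    return positions, velocities

positions, velocities = step(positions, velocities)
```

Because each body's update reads and writes only its own row, the per-body work can be distributed across hundreds of shader pipelines, which is the property both Nvidia and ATI are exploiting.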

Nvidia says the technology is particularly well suited for computers that are equipped with more than one graphics card connected using Nvidia’s Scalable Link Interface (SLI) technology, which enables multiple Nvidia graphics processors to share the burden of rendering 3D graphics.

For more details, see Nvidia's press release.




Sunday, March 12, 2006

Origami

Microsoft unveiled a new class of computer, the "Ultra Mobile Personal Computer" (UMPC), at the CeBIT trade show in Germany. Codenamed "Origami", the device is designed to provide the features of a Microsoft Windows PC but will be about one-third to one-half the size of a traditional notebook computer and, at about 1 kg, will weigh roughly the same as a bag of sugar.

UMPCs are expected to cost US$500-$1,000. A handful of manufacturers, including Samsung, which built the model shown off by Microsoft at CeBIT, have so far committed to making them. They are expected to go on sale next month. Microsoft says it developed the UMPC with Intel to meet chairman Bill Gates' demand for a new category of PC that was "less expensive, lighter and more functional".

The first generation of UMPCs will run Windows XP Tablet PC Edition 2005, but future models will run a version of the yet-to-be-released Windows Vista operating system. They will feature a 7-inch touch-screen display and a 30-60 gigabyte hard disk drive. Some models may include built-in GPS, webcams, fingerprint readers and TV tuners. Wi-Fi connectivity is likely to be a must.

There is speculation that mobile phone network operators may also choose to support the sale of UMPCs with built-in cellular connections, and may end up subsidising the cost of the devices. It will be possible to attach a standard keyboard via a USB port or, wirelessly, using Bluetooth.

Sceptics argue the devices offer neither the capability of a notebook computer nor the convenience of a traditional handheld computer, and question whether the estimated 2.5-hour battery life of UMPCs will be sufficient to meet customers' needs. Previous attempts to launch new form-factor computers, such as the Apple Newton, have flopped.




Saturday, March 04, 2006

LHC's New Milestone

The LHC is being installed in a tunnel 27 km in circumference, buried 50-175 m below ground between the Jura mountain range in France and Lake Geneva in Switzerland. (Photo courtesy of CERN.)

The Large Hadron Collider (LHC) Computing Grid is the world's largest scientific computing grid. Once operational in 2007, the LHC is expected to produce 15 million gigabytes of data per year as it investigates collisions of hadrons (composite particles such as the proton and neutron) in order to probe the basic laws of physics. Collecting and storing that data will clearly be a mammoth task.

Recently, the worldwide LHC Computing Grid collaboration completed a service challenge, sustaining a continuous flow of physics data on a worldwide Grid infrastructure at up to 1 GB/s. That corresponds to transferring a DVD's worth of scientific data from CERN every five seconds.
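
As a quick sanity check of the figures quoted, a short calculation (assuming a single-layer 4.7 GB DVD) shows where the "DVD every five seconds" comparison comes from, and how the demonstrated rate compares with the experiment's average data production:

```python
# Back-of-the-envelope check of the figures quoted above.
dvd_bytes = 4.7e9                 # single-layer DVD capacity in bytes (assumed)
rate_bytes_per_s = 1e9            # sustained 1 GB/s from the service challenge

print(f"One DVD every {dvd_bytes / rate_bytes_per_s:.1f} s")   # ~4.7 s

annual_gb = 15e6                  # 15 million gigabytes of LHC data per year
seconds_per_year = 365 * 24 * 3600
print(f"Average production rate: {annual_gb / seconds_per_year:.2f} GB/s")  # ~0.48
```

Averaged over a year, the experiments will generate data at roughly 0.48 GB/s, so the 1 GB/s sustained in the challenge is about double the average rate at which data will need to be shipped out of CERN.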

The data was transferred from CERN in Geneva, Switzerland, to 12 major computer centres around the globe. More than 20 other computing facilities were also involved in successful tests of a global Grid service for the storage, distribution and analysis of this data in real time. The completion of this service challenge is a key milestone on the way to establishing the computing infrastructure needed for the LHC, the world's largest scientific instrument, which is scheduled to begin operations in 2007.

The result of the service challenge was announced at the international Computing for High Energy and Nuclear Physics 2006 conference in Mumbai, India. The next challenge, due to start in the summer, will extend to many other computing centres and aim at continuous, stable operations. That challenge will allow many of the scientists involved to refine their computing models for handling and analysing the data from the LHC experiments, in anticipation of the start of real data taking in 2007.

For more details, visit the CERN website.