In March, Microsoft's External Research division put out a request for proposals (RFP) for three-year research projects in multicore computing. On July 28, the opening day of its annual Research Faculty Summit, Microsoft announced how and where it will be spending its grant money.
Seven academic research projects will share the $1.5 million Microsoft allocated for the Safe and Scalable Multicore Computing RFP. According to Microsoft, the RFP is designed to “stimulate and enable bold, substantial research in multicore software that rethinks the relationships among computer architecture, operating systems, runtimes, compilers and applications.”
Microsoft, like many tech leaders, is investing substantial time and money of its own to help ease the transition to multicore/manycore computing with various parallel-processing advances. At this week's Research Faculty Summit, Microsoft's Parallel Computing Platform group is set to present some of this work, including the Parallel Extensions to the .NET Framework and Parallel Language Integrated Query (PLINQ). Representatives from the Microsoft-Intel Universal Parallel Computing Research Centers also are set to present their research agendas at the conference.
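PLINQ's pitch is that a developer writes an ordinary declarative query over a collection and the runtime spreads the work across cores. As a rough, language-neutral illustration of that idea (plain Haskell with the widely used parallel package, not Microsoft's API, and an invented workload), a filter-then-map query can simply be handed to the runtime for parallel evaluation:

```haskell
-- A rough analogue of a PLINQ-style parallel query, not Microsoft's API:
-- filter a data set, apply a function to each survivor, and let the runtime
-- evaluate the results across cores (build with -threaded, run with +RTS -N).
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Hypothetical per-item workload, standing in for real query logic.
expensiveScore :: Int -> Int
expensiveScore n = sum [n `mod` k | k <- [1 .. 2000]]

parallelQuery :: [Int] -> [Int]
parallelQuery xs = parMap rdeepseq expensiveScore (filter even xs)

main :: IO ()
main = print (sum (parallelQuery [1 .. 100000]))
```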
Where is Microsoft investing outside the Redmond walls on the multicore front? Here are the projects being funded under the aforementioned multicore RFP:
Sensible Transactional Memory via Dynamic Public or Private Memory, Dan Grossman, University of Washington: “Integrating transactions into the design and implementation of modern programming languages is surprisingly difficult. The broad goal of this research is to remove such difficulties via work in language semantics, compilers, runtime systems and performance evaluation.”
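The promise of transactional memory is that a programmer marks a block of operations as atomic and the language and runtime guarantee it executes all-or-nothing, without hand-placed locks. Haskell's existing STM library is one well-known instance of transactions integrated into a language; the sketch below uses it purely as an illustration, and the funded project's own design may look quite different:

```haskell
-- Language-integrated transactions, illustrated with Haskell's stm package:
-- the transfer either fully happens or not at all, and the runtime resolves
-- conflicting concurrent transactions by retrying them.
import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, readTVarIO, writeTVar)

-- Move `amount` between two shared counters inside one atomic transaction.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  fromBal <- readTVar from
  toBal   <- readTVar to
  writeTVar from (fromBal - amount)
  writeTVar to   (toBal + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100        -- hypothetical starting balances
  bob   <- newTVarIO 0
  transfer alice bob 40
  balances <- (,) <$> readTVarIO alice <*> readTVarIO bob
  print balances                -- (60,40)
```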
Supporting Scalable Multicore Systems Through Runtime Adaptation, Kim Hazelwood, University of Virginia: “The Paradox Compiler Project aims to develop the means to build scalable software that executes efficiently on multicore and manycore systems via a unique combination of static analyses and compiler-inserted hints and speculation, combined with dynamic, runtime adaptation. This research will focus on the Runtime Adaptation portion of the Paradox system.”
Language and Runtime Support for Secure and Scalable Programs, Antony Hosking, Jan Vitek, Suresh Jagannathan and Ananth Grama, Purdue University: “Expressing and managing concurrency at each layer of the software stack, with support across layers as necessary, to reduce programmer effort in developing safe applications while ensuring scalable performance is a critical challenge. This team will develop novel constructs that fundamentally enhance the performance and programmability of applications using transaction-based approaches.”
Geospatial-based Resource Modeling and Management in Multi- and Manycore Era, Tao Li, University of Florida: “To ensure that multicore performance will scale with the increasing number of cores, innovative processor architectures (e.g., distributed shared caches, on-chip networks) are increasingly being deployed in the hardware design. This team will explore novel techniques for geospatial-based on-chip resource utilization evaluation, management and optimization.”
Reliable and Efficient Concurrent Object-Oriented Programs (RECOOP), Bertrand Meyer, ETH Zurich, Switzerland: “The goal of this project, starting with the simple concurrent object-oriented programming (SCOOP) model of concurrent computation, is to develop a practical formal semantics and proof mechanism, enabling programmers to reason abstractly about concurrent programs and allowing proofs of formal properties of these programs.”
Runtime Packaging of Fine-Grained Parallelism and Locality, David Penry, Brigham Young University: “Scalable multicore environments will require the exploitation of fine-grained parallelism to achieve superior performance…. Current packaging algorithms suffer from a number of limitations. These researchers will develop new packaging algorithms that can take into account both parallelism and locality, are aware of critical sections, can be rerun as the runtime environment changes, can incorporate runtime feedback, and are highly scalable.”
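“Packaging” here means grouping many fine-grained pieces of work into coarser units the runtime can schedule efficiently. As a deliberately simplified sketch of that idea (plain Haskell chunking, ignoring the locality, critical-section and runtime-feedback concerns the project actually targets), thousands of tiny tasks can be bundled so that each chunk, rather than each task, becomes one parallel unit:

```haskell
-- A simplified picture of packaging fine-grained parallelism: bundle tiny
-- tasks into chunks so the scheduler sees a few coarse units instead of
-- thousands of tiny ones. (Build with -threaded, run with +RTS -N.)
import Control.Parallel.Strategies (parList, rdeepseq, using)
import Data.List (foldl')

-- Hypothetical fine-grained task: far too small to be worth scheduling alone.
tinyTask :: Int -> Int
tinyTask n = n * n + 1

chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = let (h, t) = splitAt n xs in h : chunksOf n t

packagedSum :: Int -> [Int] -> Int
packagedSum chunkSize xs =
  let partials = map (foldl' (+) 0 . map tinyTask) (chunksOf chunkSize xs)
                   `using` parList rdeepseq   -- one parallel unit per chunk
  in foldl' (+) 0 partials

main :: IO ()
main = print (packagedSum 1000 [1 .. 100000])
```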
Multicore-Optimal Divide-and-Conquer Programming, Paul Hudak, Yale University: “Divide and conquer is a natural, expressive and efficient model for specifying parallel algorithms. This team casts divide and conquer as an algebraic functional form, called DC, much like the more popular map, reduce and scan functional forms. As such, DC subsumes the more popular forms, and its modularity permits application to a variety of problems and architectural details.”
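To see what a divide-and-conquer functional form looks like in practice, here is a small sketch (generic functional code, not the project's actual algebraic definition of DC): a single higher-order function is parameterised by a base-case test, a base-case solver, a splitter and a combiner, and mergesort falls out as one instance. Because the two recursive sub-problems are independent, a runtime is free to evaluate them on separate cores.

```haskell
-- A generic divide-and-conquer form, sketched in Haskell (not the project's
-- DC definition): everything problem-specific is passed in as a parameter.
dc :: (a -> Bool)      -- is the problem small enough to solve directly?
   -> (a -> b)         -- solve a base case
   -> (a -> (a, a))    -- divide into two independent sub-problems
   -> (b -> b -> b)    -- combine the sub-results
   -> a -> b
dc isBase solve divide combine = go
  where
    go x
      | isBase x  = solve x
      | otherwise = let (l, r) = divide x
                    in combine (go l) (go r)

-- Mergesort as one instance of the form.
mergeSort :: [Int] -> [Int]
mergeSort = dc ((<= 1) . length) id halve merge
  where
    halve xs = splitAt (length xs `div` 2) xs
    merge xs []         = xs
    merge [] ys         = ys
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys

main :: IO ()
main = print (mergeSort [5, 3, 8, 1, 9, 2])
```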