New Marketing Trends

Marketing Ideas for Non-Profits and Libraries

The M Word helps librarians learn about marketing trends and ideas.

Wednesday, September 19, 2007

Justin Rattner, Intel: Wait until you see what's coming!




Virtual Worlds... metaverses, paraverses (hybrid virtual worlds)... Intel's developer forum began yesterday, and here's some of the news coming out of it. Justin's keynote takes place on the 20th; it looks like the webcasts will be posted here.

"Speaking to industry leaders, developers and industry watchers at the Intel Developer Forum (IDF), Otellini showed the industry's first working chips built using 32 nanometer (nm) technology, with transistors so small that more than 4 million of them could fit on the period at the end of this sentence. Intel's 32nm process technology is on track to begin production in 2009." more

Hear Otellini's keynote here and get excited!

1 comment:

Anonymous said...

Allow me to bring the following situation to your attention: the article below, followed by my response. Stan Alterman


EE Times:
Wintel will fund parallel software lab at Berkeley
Industry seeks a model for next-gen multicore CPUs
Rick Merritt
(02/13/2008 9:39 PM EST)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=206503988


SAN JOSE, Calif. — Intel and Microsoft will help fund a new Parallel Computing Lab at the University of California at Berkeley. The effort hopes to take a leading role in the scramble to define a parallel programming model that will serve the multicore computer processors already on the drawing board.


As many as 20 universities--including MIT, Stanford and the University of Illinois--competed for the funding. According to one source, the Wintel grant is for about $2 million a year over five years. Details of the deal have not yet been released by the companies. But about 14 faculty members will work in the new Berkeley lab that quietly started operation on Jan. 21.
The grant is a sign of how the computer industry is shifting into high gear to help software catch up with advances in microprocessor design. Both Advanced Micro Devices and Intel have said they will ship processors using a mix of x86 and graphics cores as early as next year, with core counts quickly rising to eight or more per chip. But software developers are still stuck with a mainly serial programming model that cannot easily take advantage of the new hardware.
"The industry is in a little bit of a panic about how to program multi-core processors, especially heterogeneous ones," said Chuck Moore, a senior fellow at Advanced Micro Devices trying to rally support for work in the area. "To make effective use of multi-core hardware today you need a PhD in computer science. That can't continue if we want to enable heterogeneous CPUs," he said.


The Berkeley lab got its start in February 2005 with a series of weekly talks on the issue. In December 2006, researchers published a white paper detailing thoughts from those discussions.


A team of researchers has already started prototyping software systems based on ideas the group has fleshed out. They could publish preliminary results in a matter of months.


Essentially, the lab is aiming to define a way to compose parallel programs based on flexible sets of standard modules in a way similar to how serial programs are written today. The challenge in the parallel world is finding a dynamic and flexible approach to schedule parallel tasks from these modules across available hardware in complex heterogeneous multi-core CPUs.


The group believes developers could create a set of perhaps a dozen frameworks that understand the intricacies of the hardware. The frameworks could be used to write modules that handle specific tasks such as solving a matrix. New run time environments could dynamically schedule the modules across available cores of various types.


The new approach would replace the global schedulers used in today's serial software. The frameworks would replace today's parallel libraries which are not always well suited to the specifics of a given parallel application and cannot be easily mixed and matched as needed.
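The framework-plus-runtime idea the article describes can be sketched in a few lines of Python. This is a hypothetical illustration only (the names `matvec_serial`, `matvec_parallel`, and `run` are invented here, not Berkeley's actual code): interchangeable modules solve the same task, and a toy runtime picks and schedules one based on the hardware it sees.

```python
# Hypothetical sketch of "frameworks of interchangeable modules": two modules
# implement the same task (matrix-vector multiply), and a toy runtime chooses
# and schedules one depending on the cores available.
from concurrent.futures import ThreadPoolExecutor

def matvec_serial(matrix, vector):
    """Reference serial module: one dot product per row, in order."""
    return [sum(a * b for a, b in zip(row, vector)) for row in matrix]

def matvec_parallel(matrix, vector, workers=4):
    """Parallel module: rows are independent, so each dot product
    can be dispatched to any available worker."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(lambda r: sum(a * b for a, b in zip(r, vector)), row)
                   for row in matrix]
        return [f.result() for f in futures]

def run(task_modules, matrix, vector, cores_available):
    """Toy runtime: pick a module based on the hardware it sees."""
    module = task_modules["parallel" if cores_available > 1 else "serial"]
    return module(matrix, vector)

modules = {"serial": matvec_serial, "parallel": matvec_parallel}
m = [[1, 2], [3, 4]]
v = [10, 20]
print(run(modules, m, v, cores_available=8))  # [50, 110]
```

The point of the design is that application code calls `run` and never names a schedule; the module/runtime split is what lets the same program follow the hardware as core counts grow.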


Researchers at the University of Illinois, meanwhile, have explored ways to extract parallelism from today's serial code. They have also worked on compilers and programming models for next-generation graphics chips as well as the Intel Itanium processor.

RESPONSE TO ARTICLE:

Our industry is headed (at full speed) down a dead-end software and processing path!
The industry crisis: the irresistible force (an industry dedicated to Boolean logic, clocks, algorithms, von Neumann architectures, state space, and explicit control as its fundamental building blocks) acting against the immovable object (the true nature of concurrent computing).


If you start with Boolean logic, you get circuits that glitch. As soon as you put clocked registers on either side of your combinational circuits to filter out those glitches, all you can ever observe about your system is system state as a function of time (i.e. clock ticks) - a sequence of system states. The concurrent processing in between clocked registers (real transistors doing real concurrent processing) is completely hidden from observation behind the clock / memory register construct.
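The glitch-and-sample behavior described above can be mimicked in a toy Python model (illustrative only, not any real circuit): a combinational XOR whose two inputs do not switch at exactly the same moment passes through a transient value, but a clocked register samples the output only at the tick, so downstream logic sees nothing but the settled per-cycle states.

```python
# Toy model: inputs to an XOR move from `old` to `new`, with input a
# switching slightly before input b. The output glitches mid-cycle, but the
# register captures only the settled value at the clock edge.
def xor_trace(old, new):
    """Values the XOR output passes through as its inputs change, when
    input a switches before input b."""
    a0, b0 = old
    a1, b1 = new
    return [a0 ^ b0, a1 ^ b0, a1 ^ b1]  # middle entry: b still holds its old value

def clocked_register(trace):
    """A register samples only the value present at the clock edge,
    after the combinational logic has settled."""
    return trace[-1]

trace = xor_trace((0, 0), (1, 1))
print(trace)                    # [0, 1, 0]: a transient 1 (the glitch)
print(clocked_register(trace))  # 0: the glitch never reaches observation
```

This is exactly the author's point: the register construct makes the system observable only as a sequence of sampled states, hiding all the concurrent activity in between.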


It is, however, a perfect match for algorithm-based sequential programming models - hence the ascent of C.
And you can keep building better and better hardware with this paradigm if you have Moore's Law working for you - hence the ascent of the Pentium. But when the task at hand is massively concurrent computing, you simply "can't get there from here". These same fundamentals also unravel when circuit-level timing relationships among hundreds of millions of transistors become hard to predict (at deep nanometer fabrication nodes) - hence the explosion in ASIC NRE.


Concurrency is not amenable to being explicitly controlled - that approach leads to intractable problems.
But concurrency can be conveniently managed if you look at it from the right perspective.




Karl Fant and Theseus Research have developed, and own, the patented solutions to this industry crisis. They've developed the first coherent theoretical framework for computing that is distributed and concurrent at its foundations. This recently published theoretical work has yielded a suite of patented technologies that explicitly teach how to design and fabricate massively concurrent processors that are stunningly simple to program. (In fact, it unifies, in fundamental and useful ways, the previously disparate concepts of hardware and software.)


Theseus Research recognized early that Boolean logic, clocks, the notion of the algorithm, and centrally controlled, sequential processor architectures must all be left behind as we build a new concurrent computing industry.
They've developed a replacement for Boolean logic (NULL Convention Logic) that yields clockless, glitchless, logically determined circuits that regulate their own input and inherently "coordinate" (handshake) with local neighbors, based on local detection and annunciation of "completeness" (e.g. received my data - done processing - the next processing element downstream has received my output - I'm ready for my next data set). The NCL / delay-insensitive circuit-level technology is a "done deal": 25 patents and over 20 chips built (100% first-pass design success).
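The completeness detection described above can be illustrated with a small Python sketch. The dual-rail encoding used here is a common convention in the delay-insensitive literature, adopted purely for illustration (this is not Theseus' actual circuitry): NULL is `(0, 0)`, DATA 0 is `(1, 0)`, DATA 1 is `(0, 1)`, and a gate only switches when its inputs are completely DATA or completely NULL - otherwise it holds its previous output, which is what makes it glitch-free.

```python
# Dual-rail NULL Convention Logic sketch: each bit travels on two wires.
# (0, 0) = NULL ("no data yet"); (1, 0) = DATA 0; (0, 1) = DATA 1.
NULL = (0, 0)

def is_data(signal):
    """A dual-rail pair carries data when exactly one rail is asserted."""
    return signal in ((1, 0), (0, 1))

def complete(inputs):
    """Completeness detection: every input has arrived as DATA."""
    return all(is_data(s) for s in inputs)

def all_null(inputs):
    """Complementary condition: the NULL wavefront has fully arrived."""
    return all(s == NULL for s in inputs)

def ncl_and(a, b, state):
    """Dual-rail AND with NCL-style hysteresis: the output switches only
    when the inputs are completely DATA or completely NULL; on partial
    input it holds the previous state, so it can never glitch."""
    if complete([a, b]):
        bit = (a == (0, 1)) and (b == (0, 1))  # logical AND of the two bits
        return (0, 1) if bit else (1, 0)
    if all_null([a, b]):
        return NULL
    return state  # partial input: hold

out = ncl_and((0, 1), (0, 1), NULL)
print(out)                        # (0, 1): DATA 1, i.e. 1 AND 1 -> 1
print(ncl_and((0, 1), NULL, out)) # (0, 1): partial input holds, no glitch
```

Because the output itself announces completeness, neighboring stages can handshake locally ("I have your data, send the next wavefront") with no clock anywhere in the system.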


Theseus's glitchless, event-driven, circuit-level paradigm (NCL) has been proven successful in silicon at the circuit level, and it scales directly - all the way up through the processing element, processor architecture, and multiprocessor system architecture levels - to the TCP. Theseus' TCP (Theseus Concurrent Processor architecture) is an easily programmable, massively concurrent processing FABRIC. Everything from the circuit level on up is logically determined, event-driven, and delay-insensitive.
It is a "behaving structure" as opposed to a "controlled machine". It scales "for free" in terms of number of ALUs per processing element and number of processing elements, and it ports easily to any fabrication node, because if the transistors switch (even if we don't know exactly when), the processor will function correctly.


The TCP is comparatively simple to design: hand lay out one delay-insensitive NCL ALU, then tile it hundreds or thousands of times across the processor. The TCP delivers ASIC speed and power consumption, along with ultra-low EMI (for mixed-signal applications), yet it's as generally programmable as a Pentium.
All of this in a silicon footprint that is an order of magnitude (or more) smaller than other processor architectures.
Result: it's like having instant access to dozens or hundreds of ASICs at runtime.


The programming issues that are driving Berkeley's new lab, as described in Rick Merritt's recent EE Times article, just disappear with the Theseus Concurrent Processor. TCP programmers simply enunciate the dependency graph for their application (the only true referent for a process - more fundamental than any sequential representation), and the TCP compiler and architecture do the rest. Once a dependency graph is expressed for an application (i.e. a TCP program written), it can be reused directly on any configuration of the TCP architecture (small, medium, large, extra large) without ever having to rewrite anything. "TCP Software: Write once - Reuse Forever."
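What "enunciating the dependency graph" might look like can be sketched in Python. This is a hedged illustration only - the real TCP toolchain is not public, and the application here (`r = (a + b) * (a - b)`) is invented: the program names only what each value depends on, never an execution order or a processor count, so the same graph runs unchanged under any schedule.

```python
# A program expressed purely as a dependency graph: each node names the
# values it consumes. A toy "compiler + runtime" orders nodes only by data
# dependence; any schedule respecting that order gives the same answer.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical application: r = (a + b) * (a - b)
graph = {
    "sum":  (lambda env: env["a"] + env["b"], ["a", "b"]),
    "diff": (lambda env: env["a"] - env["b"], ["a", "b"]),
    "r":    (lambda env: env["sum"] * env["diff"], ["sum", "diff"]),
}

def run(graph, inputs):
    """Execute the graph in any dependency-respecting order."""
    # External inputs are already available, so they are not dependencies.
    deps = {name: set(needs) - set(inputs) for name, (_, needs) in graph.items()}
    env = dict(inputs)
    for node in TopologicalSorter(deps).static_order():
        fn, _ = graph[node]
        env[node] = fn(env)
    return env

result = run(graph, {"a": 7, "b": 3})
print(result["r"])  # 40: (7 + 3) * (7 - 3)
```

Note that `sum` and `diff` have no dependence on each other, so a concurrent runtime is free to evaluate them simultaneously on separate processing elements; the graph itself never has to change, which is the "write once, reuse on any configuration" claim in miniature.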


The Theseus Research Invocation Model of Process Expression will likely emerge as the foundation of 21st-century computer science.
The TCP architecture will likely emerge as the "known best solution" for massively concurrent processors. I believe the Invocation Language will emerge as the preferred programming paradigm for concurrent computers. Theseus Research appears on track, with initial USAF and angel investor funding, to develop an initial TCP software development environment. They appear ready to prototype the TCP in silicon (and are actively pursuing the relatively modest development funding required).


Meanwhile, tens of millions will be pumped into the Parallel Computing Lab, in search of a way to do concurrent computing using many-core processors somehow magically programmed in C. The industry remains firmly committed to applying the fundamentals that were quite successful during the era of sequential computing to a completely new and fundamentally different problem: concurrent computing. The Theseus solution is an unprecedented opportunity to build an industry on the shoulders of truly disruptive ideas and products.
Dr. Stanley B. Alterman