If the Bitgrid is such a great idea, why haven’t I heard of it before?
One-word answer: “Efficiency”
My review of the current literature, aided by my trusty pal Google, has shown that the past 25 years of programmable/custom logic design have focused on serving one God: efficiency. All of the designs I’ve seen (admittedly a small subset, because I’m not a professional circuit engineer) optimize for some or all of these common goals:
- Speed
- Power
- Size
- Circuit complexity
- Unit cost
They do this for a very good set of reasons. You want the lowest power dissipation because it makes the chip easier to feed and clean up after. You want the fastest speed because speed is the driving factor for using hardware instead of software. You want the smallest design so that you use less die area and have less chance of losing a chip to a defect. The circuit complexity goal drives a huge investment in design tools to automate design tasks as much as possible.
The things that are often traded away for these goals in a chip design are:
- Flexibility
- Fault tolerance
- Engineering costs
The primary reason for going with hardware in the first place is usually speed. If speed is not an issue, then it is usually a good idea to do a given task in software. Software is infinitely malleable, and far easier to patch and update.
Fault tolerance is usually excluded from custom chip designs because it is difficult to achieve, and is better addressed by testing and quality control measures. Only when a given feature of a design is homogeneous, as in RAM or ROM, do designers include spares.
Engineering costs are usually considered last in a mass-produced chip, but they are never trivial. The processes are optimized to shift as much of the design work as possible away from human engineers, but there will always be complexity limits imposed by the heterogeneity of the elements in a given Programmable Logic / Custom ASIC design.
Viewed from the perspective of the design community I’ve observed, it becomes obvious why nobody has built a bitgrid yet: it’s inefficient as hell by their criteria. An insider would never seriously consider a bitgrid design.
That, gentle reader, is why you haven’t heard of the bitgrid before.
Reasons to consider the bitgrid
One-word answer: “Efficiency”
Only in this case, there is a different set of parameters to optimize for:
- Flexibility
- Fault tolerance
- Testing time
- End users
The bitgrid is based on a single basic component, with a known, easily comprehended design, repeated in an orthogonal grid. The homogeneous nature of the grid makes it trivial to relocate a given portion of an application program, as the sketch below illustrates.
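To make that concrete, here is a minimal sketch in Python. It assumes a cell model of four inputs (one per neighbor), four outputs, and a 16-entry lookup table per output; the class and method names are mine for illustration, not part of any real bitgrid toolchain.

```python
# Minimal bitgrid sketch: a 2D array of identical cells.
# Assumed cell model: 4 inputs (N, E, S, W) and 4 outputs, each output
# driven by its own 16-entry lookup table over the 4 inputs.

class Cell:
    def __init__(self):
        # One 16-entry truth table per output; all zeros = inert cell.
        self.luts = [[0] * 16 for _ in range(4)]

    def outputs(self, n, e, s, w):
        index = n | (e << 1) | (s << 2) | (w << 3)
        return [lut[index] for lut in self.luts]

class BitGrid:
    def __init__(self, width, height):
        self.cells = [[Cell() for _ in range(width)] for _ in range(height)]

    def relocate(self, x0, y0, w, h, dx, dy):
        """Move a w-by-h block of cell programs by (dx, dy).

        Because every cell is identical, relocation is just copying
        lookup tables; no re-placement or re-timing is needed."""
        block = [[self.cells[y0 + j][x0 + i].luts for i in range(w)]
                 for j in range(h)]
        for j in range(h):      # clear the source region first, so
            for i in range(w):  # overlapping moves stay safe
                self.cells[y0 + j][x0 + i].luts = [[0] * 16 for _ in range(4)]
        for j in range(h):
            for i in range(w):
                self.cells[y0 + dy + j][x0 + dx + i].luts = block[j][i]
```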
It can route around a bad cell, if one is found. At minimum utilization, each and every cell is simply a programmable wire. As long as faults are not extensive, and slack is available, it should be feasible to route around bad cells in almost any design; the sketch below shows the idea.
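Continuing the sketch above, a “programmable wire” is just a lookup table that echoes one input to one output, so detouring around a bad cell means nothing more than rewriting its neighbors’ tables. The port-naming convention here (a cell’s south output feeding its southern neighbor’s north input) is an assumption of this sketch.

```python
# Directions index both the 4 inputs and the 4 outputs of a cell.
N, E, S, W = 0, 1, 2, 3

def wire_lut(src):
    """A 16-entry truth table that simply echoes input bit `src`."""
    return [(index >> src) & 1 for index in range(16)]

def make_wire(cell, src, dst):
    """Program one output of a cell to pass an input straight through."""
    cell.luts[dst] = wire_lut(src)

# A horizontal run along row 3: each cell passes its west input east.
grid = BitGrid(8, 8)
for x in range(8):
    make_wire(grid.cells[3][x], W, E)

# If the cell at row 3, column 4 tests bad, jog around it through row 4,
# purely by rewriting lookup tables -- no physical rework required.
make_wire(grid.cells[3][3], W, S)   # turn down just before the bad cell
make_wire(grid.cells[4][3], N, E)
make_wire(grid.cells[4][4], W, E)   # pass beneath the bad cell
make_wire(grid.cells[4][5], W, N)
make_wire(grid.cells[3][5], S, E)   # climb back up and continue east
```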
Because the RAM that stores the programming can be tested along with the bitgrid cells, one cell at a time, it should be possible to test a chip quickly and confidently with a minimum of test equipment complexity.
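The per-cell test procedure is simple enough to sketch as well. The pattern below is illustrative only (it reuses the pass-through tables from the previous sketch); a production test program would be chosen by the manufacturer.

```python
from itertools import product

def test_cell(cell):
    """Program one cell with known tables, then verify all 16 input
    combinations. In hardware this would write the cell's LUT RAM and
    observe its outputs; here it exercises the simulated cell."""
    for k in range(4):
        cell.luts[k] = wire_lut(k)   # output k should echo input k
    for n, e, s, w in product((0, 1), repeat=4):
        if cell.outputs(n, e, s, w) != [n, e, s, w]:
            return False
    return True

def test_grid(grid):
    """Return the coordinates of every failing cell; an empty list
    means the die is good (or at least fully routable)."""
    return [(x, y)
            for y, row in enumerate(grid.cells)
            for x, cell in enumerate(row)
            if not test_cell(cell)]
```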
Because of the simple nature of the bitgrid, a set of graphical tools for design and debugging can be built and will be applicable to any implementation of the chip. The parameters of the IO pins and array size are the main constraints to give to the tools. There are no design heterogeneity obstacles to complicate tool development.
There are, of course, some big trade-offs made, including:
- Speed
- Latency
- Power
- Die Size
- Unit cost
Compared to an ASIC, a bitgrid will always be slower, use more power, consume more die space, and cost more per unit. A given bit will have to traverse at least 4 gates per cell, just to emulate a wire; carrying a signal across a run of 100 cells, for example, costs at least 400 gate delays where an ASIC would spend a single metal trace. The end user will be compelled to expend some time optimizing their programming to fit the smallest applicable bitgrid device, of course.
Summary
I’m confident that as Moore’s law continues to drive down transistor prices, the bitgrid will come to be seen as a viable computing architecture for select applications, and may mature into general use over time. The closest analogy I can evoke is the debate that accompanied the introduction of high-level computer languages, which occurred when the cost of computation was driven below the cost of the programmers. I believe the same transition is on its way for silicon.
I believe the best way to predict the future is to invent it. I hope you like my invention.
--Mike--
March 16, 2005