An interesting article on chips


I’m not sure I really understand Moore’s law, the idea that computing speed doubles every two years. I mean, a law suggests that’s just the way it is and it simply couldn’t be any other way. Computing power doesn’t have to double every two years; it only does so because people work hard to make it so, but they could take a break.

I mean what’s wrong with every 4 or 5 years? If people want to keep up the pace that’s great, but I don’t think it’s necessary for researchers to stress themselves out trying to keep pace with an arbitrary goal.

It’s not a law, it’s just some dude saying a bunch of things…

If the dude saying a bunch of things is popular, then it’s a law. Post-20th-century logic.

Frighteningly true. I mean, not technically true, because even then it’s still not technically a law, but if one is popular enough to have an army of followers, their sheer numbers can force others to abide by it.

Only then you just have a bunch of people trying to make something into something it’s not. I mean, you can say a lump of coal is a gold nugget, you can even trick others into believing it and force the rest at gunpoint to say it’s true, but it’s still just a lump of coal.

OK, you guys actually got me interested in my own thread. Matter of fact, due to this discussion, maybe this is the more (no pun intended) interesting article. It seems this very knowledgeable guy working for one of the industry leaders made an observation based on six years of data. And he did so in 1965.

He evidently knew the industry, because that observation has pretty well held true up until now. Now some say Moore’s law will no longer apply. A saturation point, business decisions, maybe; I’m sure I don’t know. Regardless, the video is pretty interesting, I think.

The name Moore’s Law is geek-humor. It’s similar to something like the Wadsworth Constant. Moore’s Law is just a prediction that held up for a long time.

From Wikipedia:

“Moore’s law” is an observation or projection and not a physical or natural law. Although the rate held steady from 1975 until around 2012, the rate was faster during the first decade. In general, it is not logically sound to extrapolate from the historical growth rate into the indefinite future. For example, the 2010 update to the International Technology Roadmap for Semiconductors, predicted that growth would slow around 2013,[18] and Gordon Moore in 2015 foresaw that the rate of progress would reach saturation: “I see Moore’s law dying here in the next decade or so.”[19]

Kinda reminds me of a recent Twix commercial where the left Twix factory is working late because right Twix is, and right Twix is open late because left Twix is. A right Twix worker comments, wouldn’t it be funny if they were working late because we are, and the president laughs and says, “They aren’t that stupid.”

Funny thing is that in reality this is how it works. Company A works harder and harder because company B is and company B is only doing it because company A is.

So…

I guess if they took this advice in the mid-’70s we’d just now be reaching the computing power of the 1990s.
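Quick back-of-the-envelope on that quip (my own numbers, not from the article): at a two-year doubling pace the industry racked up roughly nine doublings between 1975 and the early ’90s, and at a relaxed 4.5-year pace those same nine doublings take about 40 years, landing right about now. A tiny sketch:

```java
// Back-of-the-envelope check of the quip above (my own assumptions: doubling
// starts around 1975, "just now" means roughly 2016).
public class DoublingCheck {
    public static void main(String[] args) {
        int start = 1975;
        double fastPeriod = 2.0;  // years per doubling at the Moore's-law pace
        double slowPeriod = 4.5;  // years per doubling at the hypothetical relaxed pace

        // Doublings reached by the early '90s at the fast pace.
        double doublingsBy1993 = (1993 - start) / fastPeriod;      // = 9
        // Years needed for the same number of doublings at the slow pace.
        double yearsAtSlowPace = doublingsBy1993 * slowPeriod;     // = 40.5

        System.out.printf("Doublings by 1993 at a 2-year pace: %.1f%n", doublingsBy1993);
        System.out.printf("Same doublings at a 4.5-year pace: reached around %d%n",
                start + (int) Math.round(yearsAtSlowPace));        // ~2016
    }
}
```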

Yeah, but we wouldn’t have missed it because we never had it. The only reason we can’t live without it now is because we’re so used to it, but if we never had it to begin with… People in the ’90s, even the ’70s or ’80s, weren’t any more or less happy with their lives than people are today, so if advancement had been at a slower pace we really wouldn’t be missing out on anything.

We’d be just as excited about the latest Pac-Man-level graphics as we are about Crysis-level graphics, simply because they’re new.

I am a subscriber to Wirth’s Law and Gates’ Law:

“Software manages to outgrow hardware in size and sluggishness.”
“The speed of software halves every 18 months.”

I find that using modern Microsoft Word and Excel is slower than it was back in the mid-2000s. And it’s not like you can do more in modern versions than you could back then: you can still just edit words and make spreadsheets.
In 2010 I had an ancient clunker of a computer: 1.6 GHz single core, 386 MB of RAM. I loaded Linux on it and started pulling things out. Out went the display manager, out went all the printer drivers. Out went every package I could remove while keeping a running system. What do you know? It was great. It booted in about 10 seconds and could do word processing just fine.

The same things apply to networking:

  • Network bandwidth is always increasing
  • So we slow it down with less performant content.
    The modern webpage is over 2 MB, and does not contain 2 MB of content (a quick way to check this yourself is sketched below). Page load times are actually increasing if you ignore caching.

It probably applies in the real world as well:

  • Houses are getting bigger
  • We put more stuff in them so we actually have less space to live.

Where does this leave us? How do we get the benefit of Moore’s law, get faster webpages, and have more space at home? Minimalism. Not obsessively so; don’t throw out everything. But sometimes, just consider that more isn’t always better. Better is better.
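On the page-weight point above, here’s a rough sketch of how one might check a single page’s transfer size using Java’s built-in HttpClient (the URL is just a placeholder). Note it only counts the HTML document itself, not the images, scripts, and fonts it pulls in, so the real total for a modern page is usually much higher.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch: fetch one page and report how many bytes the HTML alone weighs.
// (Placeholder URL; subresources such as images/JS/CSS are not counted here.)
public class PageWeight {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<byte[]> response =
                client.send(request, HttpResponse.BodyHandlers.ofByteArray());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("Status: %d, HTML size: %.1f KB, fetch time: %d ms%n",
                response.statusCode(), response.body().length / 1024.0, elapsedMs);
    }
}
```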

As time goes on we see greater use of generic libraries rather than code written for a specific task. Really it’s the company and developers saving time and money at the expense of runtime performance because code written specifically for the application can be optimized for that application.

Take my latest game project for instance. I have a mini-map that renders all the game nodes to an off screen buffer. For the map the nodes consist of quads that display an icon indicating the type of unit and/or resource in that node. The color of the icon will vary depending on the faction or resource and whether it is visible, mapped or unexplored.

Doing this with the game engine I’m using, jMonkeyEngine, out of the box would require loading a bunch of different quads, each with their own material, and changing up the textures and colors of the material for each individual quad. Now, in 3D rendering, it’s a lot faster to render a single high-resolution mesh than many low-resolution meshes, because every separate geometry and material adds another draw call and state change.

To speed this up I made a custom mesh, so only a single mesh is needed to render the map rather than many quads. Then I had to write my own material/shaders and create texture atlases, so that changing an individual map node’s display just means modifying a couple of UV layers.

This is much faster and uses less memory, but it’s not really reusable for other applications. The largest map in my game is 64x64 nodes, so we’re talking 4096 quads and materials vs. 1 mesh and 1 material.
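For anyone curious what the batching idea looks like in jMonkeyEngine terms, here’s a stripped-down sketch (my own simplified version, not the actual project code): one Mesh holding a quad per map node, with each quad’s texture coordinates pointing at a tile in an atlas. Updating a node’s icon then just means rewriting its four UV pairs instead of swapping materials.

```java
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.util.BufferUtils;

// Simplified sketch of batching a mini-map into one mesh (not the actual project code).
// Each map node becomes one quad; its UVs select a tile from a texture atlas.
public class MiniMapMesh {

    public static Mesh build(int gridSize, float quadSize, int atlasTilesPerRow) {
        int quads = gridSize * gridSize;
        float[] positions = new float[quads * 4 * 3];
        float[] uvs       = new float[quads * 4 * 2];
        int[]   indices   = new int[quads * 6];

        int p = 0, t = 0, i = 0, v = 0;
        for (int y = 0; y < gridSize; y++) {
            for (int x = 0; x < gridSize; x++) {
                float x0 = x * quadSize, y0 = y * quadSize;
                // Four corners of the quad, z = 0 (the mini-map is flat).
                positions[p++] = x0;            positions[p++] = y0;            positions[p++] = 0;
                positions[p++] = x0 + quadSize; positions[p++] = y0;            positions[p++] = 0;
                positions[p++] = x0 + quadSize; positions[p++] = y0 + quadSize; positions[p++] = 0;
                positions[p++] = x0;            positions[p++] = y0 + quadSize; positions[p++] = 0;

                // Start every node on atlas tile 0 ("unexplored"); updated later per node.
                t = writeTileUvs(uvs, t, 0, atlasTilesPerRow);

                // Two triangles per quad.
                indices[i++] = v;  indices[i++] = v + 1;  indices[i++] = v + 2;
                indices[i++] = v;  indices[i++] = v + 2;  indices[i++] = v + 3;
                v += 4;
            }
        }

        Mesh mesh = new Mesh();
        mesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(positions));
        mesh.setBuffer(Type.TexCoord, 2, BufferUtils.createFloatBuffer(uvs));
        mesh.setBuffer(Type.Index,    3, BufferUtils.createIntBuffer(indices));
        mesh.updateBound();
        return mesh;
    }

    // Writes the four UV pairs that map one quad onto the given atlas tile.
    private static int writeTileUvs(float[] uvs, int offset, int tile, int tilesPerRow) {
        float step = 1f / tilesPerRow;
        float u0 = (tile % tilesPerRow) * step;
        float v0 = (tile / tilesPerRow) * step;
        uvs[offset++] = u0;        uvs[offset++] = v0;
        uvs[offset++] = u0 + step; uvs[offset++] = v0;
        uvs[offset++] = u0 + step; uvs[offset++] = v0 + step;
        uvs[offset++] = u0;        uvs[offset++] = v0 + step;
        return offset;
    }
}
```

From there, changing a node’s icon is a matter of rewriting its four UV pairs in the TexCoord buffer rather than touching any materials.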

Anyway, in reference to the rate of progress: you know, today we look at societies in other parts of the world living with early-1900s technology and think, oh those poor people, they need to have access to the internet and laptops.

And a hundred years from now we would look at societies living with today’s technology and think, oh those poor people, they need quantum laptops. And another hundred years after that we would look at societies living with that technology and think, oh those poor people, they need access to the grav-net!

This means that no matter how hard we work or how fast we progress we will always be those poor people.

This PC component reminds me of a tennis racket.

It’s not only that; a lot of major software vendors tend to keep adding stuff to their applications without much in the way of refactoring or rewriting areas where the code is getting outdated and messy, so you get a continuous accumulation of cruft, along with hacks that gum up performance and lead to the major accumulation of bugs and broken features that some major solutions are now infamous for.

I can see why one might want to delay any refactoring due to pressure from the corporation to get marketable features out the door, coupled with strict deadlines, but some companies are (fortunately) realizing the need to rewrite aging parts of their software, such as what Maxon is doing with their supposed full core rewrite of Cinema4D.

I imagine it’s also really expensive to rewrite a large application like that. I think you also run into issues, particularly with large organizations, where developer skill sets and coding styles vary widely enough that it affects the overall performance of the code, because some developers might have trouble understanding how to use methods written by another and, not wanting to appear less capable, just try to fudge it rather than seek help.

Poor documentation can also lead to underperformant code, basically because it leads to the aforementioned situation. I think poor documentation often arises from, as in your scenario, strict deadlines. Recently I created a TrueType font rendering library for jMonkeyEngine which relies on java.awt.Font for parsing the font files, but I also tried Google’s sfntly library for reading glyphs from the files and found that there’s literally no documentation whatsoever for sfntly, so ultimately I relied on going through the source to figure out how to use it.

Once I got it up and running I really liked sfntly, but I found it had the same problems as java.awt.Font, so ultimately I didn’t see a need to spend the time converting my entire project to use sfntly.
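For reference, the java.awt.Font route mentioned above looks roughly like this in its most stripped-down form (a generic sketch with a placeholder font path, not the library’s actual code): load the TTF, lay a string out into a GlyphVector, and pull each glyph’s outline as a java.awt.Shape that can then be rasterized or triangulated for the engine.

```java
import java.awt.Font;
import java.awt.Shape;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;
import java.awt.geom.AffineTransform;
import java.io.File;

// Generic sketch of parsing a TTF with java.awt.Font and extracting glyph outlines
// (placeholder font path; not the actual library code).
public class GlyphOutlines {
    public static void main(String[] args) throws Exception {
        Font font = Font.createFont(Font.TRUETYPE_FONT, new File("myfont.ttf"))
                        .deriveFont(64f); // point size

        // Identity transform, with antialiasing and fractional metrics enabled.
        FontRenderContext frc = new FontRenderContext(new AffineTransform(), true, true);
        GlyphVector glyphs = font.createGlyphVector(frc, "Hello");

        for (int i = 0; i < glyphs.getNumGlyphs(); i++) {
            Shape outline = glyphs.getGlyphOutline(i); // vector outline of one glyph
            System.out.printf("glyph %d bounds: %s%n", i, outline.getBounds2D());
        }
    }
}
```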