So, what's next? The End of Moore's Law & The Rise of AI

Twenty minutes of interesting talk about how computation will evolve in the coming years.
A good vision; see you later with some comments on what you think about it.

The End of Moore’s Law & The Rise of AI

I don’t trust the AI bubble.
It’s just massive combinatorial analysis without the capacity for self-inspection: an algorithm can find the ideal sound frequency to make a song super popular, but it has no way of explaining how it reached that conclusion.
That leads to unreliable tech unless the whole environment is hermetically sealed. Image recognition software can be tricked by compression artifacts or artificially injected noise that the human eye barely notices, because our brains draw conclusions from a “better” database of primitives rather than from endless trial-and-error comparisons with no sense of depth or shape.
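Here is a rough toy sketch of what I mean (a made-up linear “classifier” and a random “image”, not a real network): a small per-pixel perturbation is enough to flip its decision, and attacks like FGSM exploit the same effect against real deep nets.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=28 * 28)             # weights of a stand-in linear model
image = rng.uniform(0, 1, size=28 * 28)  # stand-in 28x28 "photo"

def predict(x):
    return "class A" if x @ w > 0 else "class B"

score = image @ w
# Just enough per-pixel change to push the score across the decision boundary.
eps = (abs(score) + 0.1) / np.abs(w).sum()
adversarial = image - np.sign(score) * eps * np.sign(w)

print(predict(image), "->", predict(adversarial))  # the label flips
print(f"max per-pixel change: {eps:.3f}")          # a small fraction of the pixel range
```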

And of course, people don’t know how to react when a machine error harms or kills a living being, even if that death is much less likely than it would be with a human operator. There are dozens of moral, ethical, and behavioral issues to resolve before we can use AI for anything but mixing stuff from a database as fast as possible.

https://www.fastcompany.com/90247482/beware-the-ai-delusion


Well, I’m more interested in the “performance and efficiency” level than the human-replacement level…

In this video, for example, they highlight that thanks to deep learning algorithms the next generations of GPUs and CPUs will double or triple performance while lowering clock frequencies…

In practice, these new algorithms are able to efficiently redistribute code written for a single-threaded CPU across multiple threads… You can see that we would double or triple performance without having to rewrite existing code, and on top of that we would get lower energy consumption…
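To make concrete what kind of transformation I mean, here is a hand-written sketch (split manually, not by any ML system): the same single-threaded loop, spread across worker processes.

```python
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # The original single-threaded loop body, applied to one slice of the data.
    return sum(x * x for x in chunk)

def sequential(data):
    return sum(x * x for x in data)

def parallel(data, workers=4):
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers - 1)]
    chunks.append(data[(workers - 1) * size:])       # last chunk takes the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work, chunks))           # combine the partial results

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert sequential(data) == parallel(data)        # same result, spread over 4 processes
```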

There are many areas in computing where machine learning will be introduced in the coming years. Most of them are quite unsexy though :slight_smile:

One active area of research is indexing in databases, where the improvements are quite dramatic when machine learning is used. It seems we have been quite inefficient at building hash functions. This could also help other areas of programming where hash functions are used.
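As a rough illustration of the learned-index idea (a toy sketch, not the actual approach from the papers): a simple model predicts where a key sits in a sorted array, and a bounded local search corrects the prediction error.

```python
import bisect

keys = list(range(0, 1_000_000, 7))          # sorted keys, roughly linear in their position

# "Train" a trivial linear model: position ≈ slope * key + intercept
slope = (len(keys) - 1) / (keys[-1] - keys[0])
intercept = -slope * keys[0]

def lookup(key, max_error=64):
    guess = int(slope * key + intercept)                  # model predicts the position
    lo = max(0, guess - max_error)                        # bounded correction window
    hi = min(len(keys), guess + max_error)
    i = bisect.bisect_left(keys, key, lo, hi)             # local search fixes the error
    return i if i < len(keys) and keys[i] == key else None

print(lookup(7 * 1234))   # index of an existing key
print(lookup(5))          # None: key not present
```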
Parallel computing is another interesting area, but it will likely take a few years before it becomes practically viable.

There is constant progress being made in quite a few areas, and if you want to know what the next thing is, you just need to look at what has been researched over a long period of time. That way, you will inevitably get an intuition for how things are evolving and when something is about to become ready for production.

If you want to know what is actually happening, there is no need to watch keynotes; these days there seems to be a need to overpromise and overhype, and they are unfortunately very misleading. The same is true for tech journalists who don’t focus on machine learning. They usually have no clue about what is actually going on and just try to get some hype going.

The reason I am optimistic about future generations of CPUs and GPUs is that machine learning techniques have become widespread, and extremely efficient processors have emerged for computing specific machine learning algorithms. CPU and GPU makers have realized that creating new generations of chips with, for example, a core dedicated to machine learning computation that handles redistributing code across more threads, and therefore more cores, could really be a great leap in quality… For example, I believe NVIDIA has done just that with RTX GPUs and real-time ray tracing…

Thankfully, what you are describing is rather inaccurate and can’t be applied to machine learning as a whole. Let me just give you a few examples to illustrate that.

In medical imaging, machine learning can not only predict whether an image contains, for instance, a tumor (or whatever the model was trained to predict); on top of that, it can highlight the areas of the image on which it based its conclusion.
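As a rough sketch of that kind of “highlight the evidence” technique, here is a gradient-based saliency map with a tiny untrained placeholder network (a real system would use a trained model and an actual scan):

```python
import torch
import torch.nn as nn

# Tiny placeholder network; a real system would load a trained model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                       # two classes, e.g. "tumor" / "no tumor"
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in for a scan
score = model(image)[0, 1]                 # score of the class of interest
score.backward()                           # gradient of that score w.r.t. every pixel

saliency = image.grad.abs().squeeze()      # large values = influential pixels
print(saliency.shape)                      # a 64x64 heat map to overlay on the image
```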

It is well known that image recognition can be fooled. That’s why the engineers who use it have to take that into account, just as with anything else being developed: it is necessary to understand the limitations and to protect the system where needed.
You should also not forget that humans can be fooled or distracted too. They tend to lose focus and they get tired as well, so they are not optimal in all scenarios.

There are without doubt plenty of ethical issues which have to be addressed and transparently discussed.
However, claiming that machine learning solutions are just mixing stuff from a database is incorrect and highly misleading.

It is indeed great to see machine-learning-specific hardware becoming more mainstream.


The guy in the first video literally starts out by comparing the performance of matrix multiplication written in Python to optimized code written in nothing less than C! That is what I call progress. :slight_smile:
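For anyone wondering why that comparison says so little, here is a small illustration: the “fast” side is just dispatching to compiled C/BLAS under the hood.

```python
import time
import numpy as np

n = 200
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]

def matmul_pure_python(x, y):
    # Interpreted triple loop: this is the slow baseline from the talk.
    size = len(x)
    out = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for k in range(size):
            xik = x[i][k]
            for j in range(size):
                out[i][j] += xik * y[k][j]
    return out

t0 = time.perf_counter()
matmul_pure_python(a, b)
t1 = time.perf_counter()
np.array(a) @ np.array(b)                  # same math, executed by compiled BLAS code
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.3f}s, NumPy/BLAS: {t2 - t1:.3f}s")
```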


I stopped watching after that. :slight_smile:


You didn’t miss anything. I was listening to it while doing other stuff, and there are more situations like that. He throws out lots of facts of this kind and tries to connect them, which is weird, as the facts are kind of pointless and so are the connections.