Do you agree with Linus regarding parallel computing?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Do you agree with Linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:The only real trick Intel has left is bringing the RAM onto the same chip (which will cause another "one time only" bump in performance).
Are there signs of such plans, or is it just speculation about future possibilities? Just curious.
For the "soon to be released" Xeon Phi (Knights Landing), Intel are putting 8 to 16 GiB of RAM in the chip. With up to 72 cores they needed the extra bandwidth that moving the RAM on-package gives.

Intel haven't announced plans to do the same for normal laptop/desktop/server chips yet. However, it's something I'd expect to happen in the next ~5 years; and not necessarily for the extra bandwidth, but also to make mobile devices smaller and computers cheaper (and also for extra profit - e.g. people paying Intel extra for RAM built into the CPU).
embryo wrote:
Brendan wrote:Of course there's also the other problem - as soon as you break compatibility you're stuck in "no man's land" where nobody wants to write software for it because there's no market share, and there's no market share because there's no software for it;
That's why we need a vendor-independent managed solution (like a Java OS), which can provide us with a lot of ready-to-use software and a market that is ready to accept new technology. And the investment in this case is almost invisible compared to creating a new environment from the ground up.
If something like Java OS existed and had a non-zero market share, then maybe. Whether we like it or not, this isn't currently the case and unless a CPU manufacturer can convince someone like Microsoft, Apple or Xiaomi to adopt their chips they're not going to get anywhere.
embryo wrote:
Brendan wrote:There's a reason that every single architecture that's ever gone "head to head" against Intel/80x86 has died.
ARM+Android/iOS is going to kick Intel's @$$. Desktop PCs are losing market share and mobile processors are gaining performance (like the true 8-core 64-bit Snapdragon 820, for example).
Intel's market is mostly laptop/desktop/server. ARM avoided going head to head with Intel by staying away from laptop/desktop/server and focusing on embedded and phones. If ARM attempts to move into laptop/desktop/server they won't succeed. In the same way, if Intel attempted to move into embedded/phones (where ARM is established) they probably won't succeed either.

Desktop PCs are not losing market share to smartphones.

For the PC market, everyone that wants one already has one; and PC sales growth slowed down because the performance difference between the old stuff and the new stuff is too little to justify the cost of upgrading (and recent versions of Windows haven't needed hardware upgrades). It had nothing to do with smartphones.

Smartphone sales are higher, partly because people were moving from "PC" to "PC and smartphone", partly because the newer hardware is noticeably better/faster than the older hardware, and partly because they get lost/broken far more often.

Basically, correlation does not imply causation.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo

Re: Do you agree with Linus regarding parallel computing?

Post by embryo »

Brendan wrote:For the "soon to be released" Xeon Phi (Knights Landing), Intel are putting 8 to 16 GiB of RAM in the chip. With up to 72 cores they needed the extra bandwidth that moving the RAM on-package gives.
Interesting. I hadn't noticed that while skimming some Intel docs (apart from the more than 30 MB of L2 cache). But my reading was only skin-deep.
Brendan wrote:If something like Java OS existed and had a non-zero market share, then maybe. Whether we like it or not, this isn't currently the case and unless a CPU manufacturer can convince someone like Microsoft, Apple or Xiaomi to adopt their chips they're not going to get anywhere.
Yes. But the world is changing, and the time for an active search for new ways of doing computing is very close, simply because Intel has already hit the performance limits.
Brendan wrote:Desktop PCs are not losing market share to smartphones.

For the PC market, everyone that wants one already has one; and PC sales growth slowed down because the performance difference between the old stuff and the new stuff is too little to justify the cost of upgrading (and recent versions of Windows haven't needed hardware upgrades). It had nothing to do with smartphones.
You've forgotten about tablets. Such a beast is absolutely enough for an ordinary internet user. Except maybe an external keyboard for those who still don't like the on-screen variant.

And with the promotion of speech recognition technologies even an office worker can forget about the desktop PC. Pervasive Wi-Fi ensures communication, cloud services remove processing-power constraints from the new thin client, the input is speech based, a finger is the pointing device, and all the tablet has to do is render some fancy drawings (and its inherent parallelization allows a simple processor design).
Brendan wrote:Smartphone sales are higher, partly because people were moving from "PC" to "PC and smartphone", partly because the newer hardware is noticeably better/faster than the older hardware, and partly because they get lost/broken far more often.
Don't overlook the progress. It is already here and you still think that "it will take years to get here". The world is changing.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Do you agree with Linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:If something like Java OS existed and had a non-zero market share, then maybe. Whether we like it or not, this isn't currently the case and unless a CPU manufacturer can convince someone like Microsoft, Apple or Xiaomi to adopt their chips they're not going to get anywhere.
Yes. But the world is changing, and the time for an active search for new ways of doing computing is very close, simply because Intel has already hit the performance limits.
It is possible that some revolutionary breakthrough might occur; but all the foreseeable possibilities are either recycled old ideas that failed (asynchronous CPUs, wide architectures with static instruction scheduling, etc) or have remained as science fiction for decades without any progress towards becoming viable end products (e.g. quantum computing). Until there's evidence to the contrary, I remain sceptical.
embryo wrote:
Brendan wrote:Desktop PCs are not losing market share to smartphones.

For the PC market, everyone that wants one already has one; and PC sales growth slowed down because the performance difference between the old stuff and the new stuff is too little to justify the cost of upgrading (and recent versions of Windows haven't needed hardware upgrades). It had nothing to do with smartphones.
You've forgotten about tablets. Such a beast is absolutely enough for an ordinary internet user. Except maybe an external keyboard for those who still don't like the on-screen variant.
I didn't forget tablets - I just don't think they belong in the "desktop PC" category. They're closer to smartphones (in terms of weight, power and performance; and in terms of usage).
embryo wrote:And with the promotion of speech recognition technologies even an office worker can forget about the desktop PC. Pervasive Wi-Fi ensures communication, cloud services remove processing-power constraints from the new thin client, the input is speech based, a finger is the pointing device, and all the tablet has to do is render some fancy drawings (and its inherent parallelization allows a simple processor design).
Use an image editor (e.g. MS Paint, GIMP, Photoshop) to draw a picture of your Mother using speech recognition alone. For a fairer test, create a nice looking 6 page document in a word-processor by using speech recognition to enter all the text and format it. After doing these things, come back and tell me how awesome speech recognition is. You will find that it's unusable - both error prone (and only suitable for situations where the vocabulary it needs to recognise is extremely limited) and significantly slower than keyboard/mouse. Now imagine 20+ people in a nice "quiet" office all yelling at their computers for 8 hours a day trying to get real work done (slowly).

"Cloud" is mostly a combination of dead technology (mainframe with dumb terminals, which made sense when processing time was extremely expensive and died because processing time became cheap) and marketing scam (force people to pay a monthly fee to do what they could've done without your online service).
embryo wrote:
Brendan wrote:Smartphone sales are higher, partly because people were moving from "PC" to "PC and smartphone", partly because the newer hardware is noticeably better/faster than the older hardware, and partly because they get lost/broken far more often.
Don't overlook the progress. It is already here and you still think that "it will take years to get here". The world is changing.
Funny graphs that show correlation between completely unrelated statistics.

In 1990 nobody had a smartphone, and in 2010 millions of people had smartphones. Based on "infinite growth" from 1990 to 2010, we can expect the entire universe to consist of nothing but smartphones before the year 2030 - they will replace PCs, and dirt, and all forms of life, and everything else. :roll:


Cheers,

Brendan
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Do you agree with Linus regarding parallel computing?

Post by Combuster »

Brendan wrote:In 1990 nobody had a smartphone, and in 2010 millions of people had smartphones. Based on "infinite growth" from 1990 to 2010, we can expect the entire universe to consist of nothing but smartphones before the year 2030 - they will replace PCs, and dirt, and all forms of life, and everything else.
There's one prediction for artificial intelligence :mrgreen:
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
embryo

Re: Do you agree with Linus regarding parallel computing?

Post by embryo »

Brendan wrote:It is possible that some revolutionary breakthrough might occur; but all the foreseeable possibilities are either recycled old ideas that failed (asynchronous CPUs, wide architectures with static instruction scheduling, etc) or have remained as science fiction for decades without any progress towards becoming viable end products (e.g. quantum computing). Until there's evidence to the contrary, I remain sceptical.
Only time can tell us "who is who" in a reliable manner.
Brendan wrote:I didn't forget tablets - I just don't think they belong in the "desktop PC" category. They're closer to smartphones (in terms of weight, power and performance; and in terms of usage).
Yes, they are close brothers of smartphones, but the point is they can replace the desktop PC for many users. And they actually are replacing PCs. You can ask your friends about their usage experience. It is really simple to use a tablet in bed, or in another place or position that is "not very comfortable" for desktops or laptops. A tablet's interface is more pleasant; there's no need for a mouse. And you can just leave the tablet somewhere without any switching off or folding, and find it instantly ready a day later, just like a book or another small thing. It's just more convenient, and its price is lower than the price of a similarly powerful (for internet usage) laptop.
Brendan wrote:Use an image editor (e.g. MS Paint, GIMP, Photoshop) to draw a picture of your Mother using speech recognition alone.
If there are some situations unsuitable for speech recognition, it doesn't mean there is no suitable situation for it.
Brendan wrote:For a fairer test, create a nice looking 6 page document in a word-processor by using speech recognition to enter all the text and format it.
Entering text is absolutely possible. Formatting it with a touch interface is convenient. Correcting recognition errors by touching a word and spelling it once more is also an existing reality. It's the lack of market penetration of such technologies that prevents the majority of users from switching to such forms of interface; technically all the technologies are ready and they work well. The overall speed of completing work such as document editing can still be questioned, but I attribute that to the lack of market penetration, when a useful technology cannot evolve into a mature one just because too little effort is put into elaborating it as required. But the world is moving and technologies are maturing every day, so it is perfectly possible to have a mature technology (more convenient than a keyboard-based variant) in a few years. However, market inertia can play a bad game here.
Brendan wrote:Now imagine 20+ people in a nice "quiet" office all yelling at their computers for 8 hours a day trying to get real work done (slowly).
I remember a call-centre room full of talking operators. It's not very noisy. I can compare it with open-space offices; maybe open space is still a bit quieter, but not significantly. For people who like to work in silence it is not very convenient, but it's comparable to the problems of an open-space office.
Brendan wrote:"Cloud" is mostly a combination of dead technology (mainframe with dumb terminals, which made sense when processing time was extremely expensive and died because processing time became cheap) and marketing scam (force people to pay a monthly fee to do what they could've done without your online service).
The cloud is a heterogeneous server environment. It's more than a mainframe because it's about a lot of mainframes. And some mainframes can be different. As a result we can cut our costs. So, the cloud is just cheaper due to the effect of scale. But if you prefer to pay for your personal server and its power consumption - go ahead, and you will find yourself in a situation where your spending is significantly bigger than your competitors'. The effect of scale is a powerful thing and should be considered by any enterprise.
The correlation between margarine and divorce rate can be explained as a correlation with stability. When life is more stable, people become relaxed and lazy. The divorce rate decreases, and instead of buying margarine people prefer to buy butter. So, the "completely-unrelated-stats" name is just misleading.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Do you agree with Linus regarding parallel computing?

Post by Rusky »

embryo wrote:
The correlation between margarine and divorce rate can be explained as a correlation with stability. When life is more stable, people become relaxed and lazy. The divorce rate decreases, and instead of buying margarine people prefer to buy butter. So, the "completely-unrelated-stats" name is just misleading.
You missed the point - there is no explanation for that correlation. The whole point of that page (and its source, http://www.tylervigen.com/) is that many things that have nothing to do with each other can appear correlated, so you need to be more careful with the conclusions you draw, not just draw ever-more-wild conclusions to explain them.
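Rusky's point is easy to demonstrate: two series built from completely independent random noise will, once they both trend, regularly show a large correlation coefficient. A minimal Python sketch (all names and numbers here are made up for illustration):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n, seed):
    """A trending series: cumulative sum of independent Gaussian steps."""
    rng = random.Random(seed)
    pos, walk = 0.0, []
    for _ in range(n):
        pos += rng.gauss(0, 1)
        walk.append(pos)
    return walk

# The two walks share no data whatsoever, yet |r| is often large.
a = random_walk(200, seed=1)
b = random_walk(200, seed=2)
print(f"r = {pearson(a, b):+.2f}")
```

Run it with a few different seeds: genuinely unrelated trending series "correlate" for exactly the same reason, which is why trend graphs prove nothing by themselves.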
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Do you agree with Linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:I didn't forget tablets - I just don't think they belong in the "desktop PC" category. They're closer to smartphones (in terms of weight, power and performance; and in terms of usage).
Yes, they are close brothers of smartphones, but the point is they can replace the desktop PC for many users. And they actually are replacing PCs. You can ask your friends about their usage experience. It is really simple to use a tablet in bed, or in another place or position that is "not very comfortable" for desktops or laptops. A tablet's interface is more pleasant; there's no need for a mouse. And you can just leave the tablet somewhere without any switching off or folding, and find it instantly ready a day later, just like a book or another small thing. It's just more convenient, and its price is lower than the price of a similarly powerful (for internet usage) laptop.
So, tablets are replacing books, but smartphones aren't replacing PCs?
embryo wrote:
Brendan wrote:Use an image editor (e.g. MS Paint, GIMP, Photoshop) to draw a picture of your Mother using speech recognition alone.
If there are some situations unsuitable for speech recognition, it doesn't mean there is no suitable situation for it.
Brendan wrote:For a fairer test, create a nice looking 6 page document in a word-processor by using speech recognition to enter all the text and format it.
Entering text is absolutely possible. Formatting it with a touch interface is convenient. Correcting recognition errors by touching a word and spelling it once more is also an existing reality. It's the lack of market penetration of such technologies that prevents the majority of users from switching to such forms of interface; technically all the technologies are ready and they work well. The overall speed of completing work such as document editing can still be questioned, but I attribute that to the lack of market penetration, when a useful technology cannot evolve into a mature one just because too little effort is put into elaborating it as required. But the world is moving and technologies are maturing every day, so it is perfectly possible to have a mature technology (more convenient than a keyboard-based variant) in a few years. However, market inertia can play a bad game here.
For entering text, see how quickly you can enter "there their they're" correctly using speech recognition. If you're saying that speech recognition is bad in most situations and you have to fall back to something else (like touch) to work around the problem, then I agree.
embryo wrote:
Brendan wrote:"Cloud" is mostly a combination of dead technology (mainframe with dumb terminals, which made sense when processing time was extremely expensive and died because processing time became cheap) and marketing scam (force people to pay a monthly fee to do what they could've done without your online service).
The cloud is a heterogeneous server environment. It's more than a mainframe because it's about a lot of mainframes. And some mainframes can be different. As a result we can cut our costs. So, the cloud is just cheaper due to the effect of scale. But if you prefer to pay for your personal server and its power consumption - go ahead, and you will find yourself in a situation where your spending is significantly bigger than your competitors'. The effect of scale is a powerful thing and should be considered by any enterprise.
Sounds like a whole lot of wishful thinking to me. There is almost nothing that people used to do on desktop PCs that has been shifted to "cloud". The only things that have been shifted to "cloud" are things that were already done on remote servers (e.g. HTTP and FTP servers, databases, etc), and a very small amount of processing for lame systems (e.g. smartphones) that don't come close to the processing power of even the cheapest desktop CPU.

While you might be able to get server hardware cheaper, by the time you add additional expenses ("cloud company" employee wages, "cloud company" advertising, etc) then add "cloud company profit", then add the cost of networking needed to get the data to/from the cloud; it ends up more expensive than doing the processing and storage locally. It's also slower (due to networking latency), and less reliable (e.g. your Internet connection drops out and you've lost access to all your data, the "cloud company" you were relying on goes bankrupt and you've lost all your data, etc). In addition, few people want to have to trust large companies with their personal data.

Essentially, for almost everything that people use desktop PCs for, "cloud" is more expensive, slower, less reliable and less trustworthy.


Cheers,

Brendan
embryo

Re: Do you agree with Linus regarding parallel computing?

Post by embryo »

Brendan wrote:So, tablets are replacing books, but smartphones aren't replacing PCs?
Smartphones target a different niche. It is the small screen, and too little volume for a powerful processor and battery. But tablets are much bigger and have the volume for hardware that is acceptable for modern internet surfing (which is mostly bloated with a lot of useless JavaScript, like those advertising IFrames that scroll and twist different crap on your page while consuming the biggest part of the processor and GPU power and the system's memory).
Brendan wrote:For entering text, see how quickly you can enter "there their they're" correctly using speech recognition. If you're saying that speech recognition is bad in most situations and you have to fall back to something else (like touch) to work around the problem, then I agree.
There are examples on the net and you can try them online (but often they require a modern version of the Chrome browser). But if you pretend that a typical word-processor task is about something like "there their they're", then yes, speech recognition is still not ready for your requirements.
Brendan wrote:There is almost nothing that people used to do on desktop PCs that has been shifted to "cloud". The only things that have been shifted to "cloud" are things that were already done on remote servers (e.g. HTTP and FTP servers, databases, etc), and a very small amount of processing for lame systems (e.g. smartphones) that don't come close to the processing power of even the cheapest desktop CPU.
Then you should study any preferred multi-tiered client-server application. They typically perform a lot of processing AFTER the data is fetched from a database. And a typical client here is an ordinary web browser, which just visualizes data and receives the user's actions. Web servers also perform some processing in the form of client request management and underlying layer invocation. Also there is a lot of communication logic on the server side (all those web services, message queues and so on). Add a lot of security applications (like firewalls and cryptography servers) and you will get an interesting picture of hundreds of server-side agents working for a mid-sized enterprise. And it's all about shifting processing from client PCs to the enterprise server network, which gradually transforms into a form of a cloud that is still misunderstood by many people.
Brendan wrote:While you might be able to get server hardware cheaper, by the time you add additional expenses ("cloud company" employee wages, "cloud company" advertising, etc) then add "cloud company profit", then add the cost of networking needed to get the data to/from the cloud; it ends up more expensive than doing the processing and storage locally.
Economically it is about cutting enterprise costs and sharing the enterprise's profit. If you imagine that your enterprise is offered a way to decrease costs by 1 million $ per year, but in exchange for paying 950 000 $ to a cloud-owning company, what decision would you make? And here every participant gets its 50 000 $ just because of the economy of scale. Every enterprise cuts 50 000 $ from its costs, and the cloud company gets 50 000 $ from, say, every 10 clients. And of course, it is a big picture with hidden costs redistributed in a not very clear manner; but after some period of market adaptation to this new way of cutting costs, most of the hidden costs will become clear, and there will be nothing more preventing the cloud approach from delivering its power to every enterprise.
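The arithmetic of that split can be written out explicitly; the figures are the hypothetical ones from the post, not market data:

```python
# Hypothetical figures from the post above - not real market numbers.
cost_cut_per_client = 1_000_000    # $ per year the enterprise stops spending
cloud_fee_per_client = 950_000     # $ per year it pays the cloud company

# Each enterprise keeps the difference.
client_net_saving = cost_cut_per_client - cloud_fee_per_client

# The cloud company keeps (say) 50 000 $ out of every 10 clients' fees;
# the rest must cover its hardware, wages, bandwidth and other costs.
clients = 10
provider_revenue = clients * cloud_fee_per_client
provider_profit = 50_000
provider_costs = provider_revenue - provider_profit

print(client_net_saving)   # 50000 $ per enterprise per year
print(provider_costs)      # 9450000 $ the provider must stay under
```

The whole argument turns on whether a provider really can serve 10 clients for less than that cost line.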
Brendan wrote:In addition, few people want to have to trust large companies with their personal data.
Oh yes, if we trust all those facebooks and even share some intimate photos or information, then of course we should think very cautiously when we are going to buy a flight or book a hotel room, just because they ask us for our card number (on which you typically hold much less than 1000 $, or even use some virtual card for internet payments only).
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Do you agree with Linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:For entering text, see how quickly you can enter "there their they're" correctly using speech recognition. If you're saying that speech recognition is bad in most situations and you have to fall back to something else (like touch) to work around the problem, then I agree.
There are examples on the net and you can try them online (but often they require a modern version of the Chrome browser). But if you pretend that a typical word-processor task is about something like "there their they're", then yes, speech recognition is still not ready for your requirements.
The problem with English (I'm not sure about other languages) is that some words sound exactly the same (there, their, they're; or, oar, ore; weather, whether; etc), and there are even more words that sound similar enough that, even though they are different, they still confuse speech recognition (sores, source); and on top of that different people have different accents. The end result is a high chance of speech recognition guessing wrong (and the user having to stop and spend time to correct it).

The other problem is that you need "command words". For example, maybe the system accepts "next window" to switch to the next window (same as "alt+tab" with keyboard) and that means you can't enter a sentence like "Jim cleaned the first window quickly, but the next window took longer".

If the vocabulary it needs to recognise is much smaller (e.g. a small number of command words only) and those words are carefully chosen words that don't sound similar; then it can work (but keyboard, mouse and/or touch will still be faster). Note: a good typist can type 80 words per minute (or 80*6/60 = 8 characters per second), but with a small number of commands you can assign one command per key and in that case the same typist might reach 8 commands per second (or higher if the keys are chosen well - e.g. think of how quickly a hard-core gamer would be able to press the WASD keys during gameplay). Even for the shortest command words few people are able to speak that fast.
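The throughput figures in the paragraph above can be checked in a couple of lines (the 6-characters-per-word convention is from the post; the ~150 wpm conversational speaking rate is a rough assumption, not a measurement):

```python
# Rough input-rate comparison; all constants are ballpark conventions.
TYPIST_WPM = 80        # a good typist
CHARS_PER_WORD = 6     # convention: 5 letters plus a space
SPEECH_WPM = 150       # typical conversational speaking rate (assumed)

keystrokes_per_sec = TYPIST_WPM * CHARS_PER_WORD / 60   # 80*6/60 = 8.0

# With one command bound per key, each keystroke is a whole command,
# while even one-word spoken commands arrive only at speaking pace.
spoken_commands_per_sec = SPEECH_WPM / 60               # 2.5

print(keystrokes_per_sec, spoken_commands_per_sec)      # 8.0 2.5
```

So even under generous assumptions, one-word spoken commands run at roughly a third of a typist's keystroke rate.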

Mostly, speech recognition is only the best option if touch can't be used - e.g. you're driving a car, you're a quadriplegic, etc.
embryo wrote:
Brendan wrote:There is almost nothing that people used to do on desktop PCs that has been shifted to "cloud". The only things that have been shifted to "cloud" are things that were already done on remote servers (e.g. HTTP and FTP servers, databases, etc), and a very small amount of processing for lame systems (e.g. smartphones) that don't come close to the processing power of even the cheapest desktop CPU.
Then you should study any preferred multi-tiered client-server application. They typically perform a lot of processing AFTER the data is fetched from a database. And a typical client here is an ordinary web browser, which just visualizes data and receives the user's actions. Web servers also perform some processing in the form of client request management and underlying layer invocation. Also there is a lot of communication logic on the server side (all those web services, message queues and so on). Add a lot of security applications (like firewalls and cryptography servers) and you will get an interesting picture of hundreds of server-side agents working for a mid-sized enterprise. And it's all about shifting processing from client PCs to the enterprise server network, which gradually transforms into a form of a cloud that is still misunderstood by many people.
All of this has been around (on servers and not desktop PCs) for decades - it's not an example of something that was done on desktop PCs and got shifted to "cloud".
embryo wrote:
Brendan wrote:While you might be able to get server hardware cheaper, by the time you add additional expenses ("cloud company" employee wages, "cloud company" advertising, etc) then add "cloud company profit", then add the cost of networking needed to get the data to/from the cloud; it ends up more expensive than doing the processing and storage locally.
Economically it is about cutting enterprise costs and sharing the enterprise's profit. If you imagine that your enterprise is offered a way to decrease costs by 1 million $ per year, but in exchange for paying 950 000 $ to a cloud-owning company, what decision would you make? And here every participant gets its 50 000 $ just because of the economy of scale.
And then they spend 100 000 $ on the additional Internet bandwidth they need.

Where you live, do you have a toilet? Surely it'd be cheaper (due to economies of scale) to have shared public toilets (like, a single massive "toilet cloud" in the middle of the city). Does it matter that you'd have to spend 20 minutes travelling just to get to a toilet (or that there's a strange guy taking photos of you while you're doing your business), or that cost is only one factor to consider (convenience, speed, security, flexibility)?
embryo wrote:
Brendan wrote:In addition, few people want to have to trust large companies with their personal data.
Oh, yes, if we trust all those facebooks and even share some intimate photos or information, then of course we should think very cautiously when we are going to buy a flight or to book a hotel room, just because they are asking us about our card number (where you typically hold much less than 1000$ or even use some virtual card for internet payments only).
These are all examples of things that were never done on desktop PCs and always done with remote servers. Would you do your accountancy using a large company and upload all your financial details for them to store indefinitely (without strict rules that govern credit card information usage); given that even a cheap smartphone has enough processing and storage to do it anyway, and that (unlike social media and online shopping) giving data to anyone else isn't required?


Cheers,

Brendan
embryo

Re: Do you agree with Linus regarding parallel computing?

Post by embryo »

Brendan wrote:The problem with English (I'm not sure about other languages) is that some words sound exactly the same (there, their, they're; or, oar, ore; weather, whether; etc)
In Mandarin it's much worse :)
Brendan wrote:The end result is a high chance of speech recognition guessing wrong (and the user having to stop and spend time to correct it).
It is the context that is used to correct a guess. Next come word-combination statistics and many other methods. And as the end result we even have audio interpreters that translate speech from one language to another (like this).
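The "word combination statistics" idea embryo describes can be sketched in a few lines. This is a toy illustration only, not a real recognizer; the bigram counts below are invented purely for the example:

```python
# Toy sketch: picking between homophone candidates ("their"/"there"/"they're")
# by scoring word combinations with bigram counts. Counts are made up.
from itertools import product

# Assumed bigram counts from some hypothetical corpus.
BIGRAMS = {
    ("their", "house"): 50, ("there", "house"): 2, ("they're", "house"): 1,
    ("over", "their"): 5, ("over", "there"): 40, ("over", "they're"): 1,
}

# Each recognized word maps to the set of homophones it could be.
HOMOPHONES = {"their": ["their", "there", "they're"]}

def score(words):
    """Sum of bigram counts over adjacent word pairs."""
    return sum(BIGRAMS.get(pair, 0) for pair in zip(words, words[1:]))

def disambiguate(words):
    """Try every homophone substitution and keep the best-scoring sentence."""
    options = [HOMOPHONES.get(w, [w]) for w in words]
    return max(product(*options), key=score)

print(disambiguate(["over", "their"]))    # context favours "there"
print(disambiguate(["their", "house"]))   # context favours "their"
```

Real systems use far larger n-gram or neural language models, but the principle is the same: the acoustically ambiguous word is resolved by which sentence the statistics consider more likely.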
Brendan wrote:The other problem is that you need "command words". For example, maybe the system accepts "next window" to switch to the next window (same as "alt+tab" with keyboard) and that means you can't enter a sentence like "Jim cleaned the first window quickly, but the next window took longer".
If you are in text input mode then it is logical to disable any spoken commands. It's that simple.
Brendan wrote:Note: a good typist can type 80 words per minute (or 80*6/60 = 8 characters per second), but with a small number of commands you can assign one command per key and in that case the same typist might reach 8 commands per second (or higher if the keys are chosen well - e.g. think of how quickly a hard-core gamer would be able to press the WASD keys during gameplay). Even for the shortest command words few people are able to speak that fast.
But the majority of people are not good typists, so they accept the speed that is available today. The most interesting part is the technology's evolution: if it is acceptable for some uses now, then tomorrow it will outperform any typist. Yes, the technology needs some time to mature enough to satisfy all needs, but today it has reached an acceptable level for some uses, and text entry is now workable (while still not perfect).
Brendan wrote:
embryo wrote:Then you should study any preferred multitiered client-server application. They typically perform a lot of processing AFTER a data is got from a database. And a typical client here is an ordinary web-browser, which just visualizes data and receives user's actions. Web servers also perform some processing in form of client request management and underlaying layer invocation. Also there is a lot of communication logic on the server side (all those web-services, message queues and so on). Add here a lot of security applications (like firewalls and cryptography servers) and you will get an interesting picture of hundreds of server side agents working for a middle class enterprise. And it's all about shifting processing from client PCs to the enterprise server network, which gradually transforms into a form of a cloud that is still misunderstood by many people.
All of this has been around (on servers and not desktop PCs) for decades - it's not an example of something that was done on desktop PCs and got shifted to "cloud".
It's an example of the server world being about "more than databases only". And it shows the need for a lot of servers.
Brendan wrote:And then they spend $100,000 on the additional Internet bandwidth they need.
Middle level enterprises can afford to buy an office building, so why can't they afford to buy one fiber cable? And its capacity is up to 1 petabit.
Brendan wrote:Where you live, do you have a toilet? Surely it'd be cheaper (due to economies of scale) to have shared public toilets (like, a single massive "toilet cloud" in the middle of the city). Does it matter that you'd have to spend 20 minutes travelling just to get to a toilet (or that there's a strange guy taking photos of you while you're doing your business), or that cost is only one factor to consider (convenience, speed, security, flexibility)?
It is often the case that there is only one toilet in an apartment. Can you imagine how much trouble many kids have when they need to line up for the toilet? Yet for some reason there is still just one toilet for a whole apartment. And the analogy here is very clear when we replace the kids with businesses and the toilet with a cloud.
Brendan wrote:Would you do your accountancy using a large company and upload all your financial details for them to store indefinitely (without strict rules that govern credit card information usage); given that even a cheap smartphone has enough processing and storage to do it anyway, and that (unlike social media and online shopping) giving data to anyone else isn't required?
The strict rules are just a matter of an agreement. Today the non-disclosure agreement is a very suitable template for such a situation. Is it so complex to sign an agreement?

And a smartphone's power is not enough for the accounting data processing of a middle level enterprise. Also, accounting is just a small area compared with business data like OLTP or SCM or CRM or many other types.

Re: Do you agree with linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:The other problem is that you need "command words". For example, maybe the system accepts "next window" to switch to the next window (same as "alt+tab" with keyboard) and that means you can't enter a sentence like "Jim cleaned the first window quickly, but the next window took longer".
If you are in text input mode then it is logical to disable any spoken commands. It's that simple.
How do you get out of "text input mode" without using a different input device? "hello world done stop dammit stop I want to get out of text input mode now you stupid piece of crud oh for the love of god it's still going and now my document is ruined delete delete delete is there a way to remove all this text screw it I'll stop it with a hammer and chuck this garbage in trash where it belongs".
embryo wrote:
Brendan wrote:Note: a good typist can type 80 words per minute (or 80*6/60 = 8 characters per second), but with a small number of commands you can assign one command per key and in that case the same typist might reach 8 commands per second (or higher if the keys are chosen well - e.g. think of how quickly a hard-core gamer would be able to press the WASD keys during gameplay). Even for the shortest command words few people are able to speak that fast.
But the majority of people are not good typists, so they accept the speed that is available today. The most interesting part is the technology's evolution: if it is acceptable for some uses now, then tomorrow it will outperform any typist. Yes, the technology needs some time to mature enough to satisfy all needs, but today it has reached an acceptable level for some uses, and text entry is now workable (while still not perfect).
Yes, most smartphone users accept the speed of touch today (which is far slower than using a full sized keyboard), because even a tiny and awkward on-screen keyboard is still better than speech.
embryo wrote:Middle level enterprises can afford to buy an office building, so why can't they afford to buy one fiber cable? And its capacity is up to 1 petabit.
And then they plug one end of the optical fibre into a USB port and the other end into a passing cloud; and get 1 petabit of upload/download speed without anyone charging them "$x per MiB of data" and without buying more expensive/higher speed networking cards, switches, cabling, etc?

Middle level enterprises can afford to send me $12345 per week for nothing. Just because they can afford it, doesn't mean they're stupid enough to waste money for no reason.
embryo wrote:
Brendan wrote:Where you live, do you have a toilet? Surely it'd be cheaper (due to economies of scale) to have shared public toilets (like, a single massive "toilet cloud" in the middle of the city). Does it matter that you'd have to spend 20 minutes travelling just to get to a toilet (or that there's a strange guy taking photos of you while you're doing your business), or that cost is only one factor to consider (convenience, speed, security, flexibility)?
It is often the case that there is only one toilet in an apartment. Can you imagine how much trouble many kids have when they need to line up for the toilet? Yet for some reason there is still just one toilet for a whole apartment. And the analogy here is very clear when we replace the kids with businesses and the toilet with a cloud.
So the "cloud" is a private server shared by 1 to 5 computers?
embryo wrote:
Brendan wrote:Would you do your accountancy using a large company and upload all your financial details for them to store indefinitely (without strict rules that govern credit card information usage); given that even a cheap smartphone has enough processing and storage to do it anyway, and that (unlike social media and online shopping) giving data to anyone else isn't required?
The strict rules are just a matter of an agreement. Today the non-disclosure agreement is a very suitable template for such a situation. Is it so complex to sign an agreement?

And a smartphone's power is not enough for the accounting data processing of a middle level enterprise. Also, accounting is just a small area compared with business data like OLTP or SCM or CRM or many other types.
Are you even capable of keeping within context? Middle level enterprises have always used servers for accounting and not desktop PCs, so for that case "cloud" didn't replace anything that was done on desktop PCs. For home/personal use (what I was actually talking about) it was/is done on desktop PCs and nobody uses cloud, so for that case "cloud" still hasn't replaced anything that was done on desktop PCs (and cheap smartphones are powerful enough for that case).

Given the choice between signing an agreement and not signing an agreement, I doubt I'd sign an agreement.


Cheers,

Brendan

Re: Do you agree with linus regarding parallel computing?

Post by embryo »

Brendan wrote:How do you get out of "text input mode" without using a different input device?
Well, I will use a different input device. Is that a problem? After having my text recognized quicker than I could type it, I see no problem in a finishing gesture.
Brendan wrote:Yes, most smartphone users accept the speed of touch today (which is far slower than using a full sized keyboard), because even a tiny and awkward on-screen keyboard is still better than speech.
This is about "I think it's impossible". Have you at least tried the online demos? Read this test result if you don't want to test it yourself.
Brendan wrote:And then they plug one end of the optical fibre into a USB port and the other end into a passing cloud; and get 1 petabit of upload/download speed without anyone charging them "$x per MiB of data" and without buying more expensive/higher speed networking cards, switches, cabling, etc?
The infrastructure that is capable of transmitting all the required data is already there. An enterprise just needs to redirect a connection from its servers to the cloud. If there is a need to combine data flows of more than 1 Gbit (a standard card's capacity), then it will cost an enterprise very little compared with the benefits ($50,000 per year). It is almost the same on the cloud's end. Next, a modern city infrastructure includes those petabit-capable fibers, and all that an enterprise needs is the last-mile connection, which is really cheap compared with the benefits. So, it seems that it's again about "I just don't believe it".
Brendan wrote:Middle level enterprises can afford to send me $12345 per week for nothing. Just because they can afford it, doesn't mean they're stupid enough to waste money for no reason.
If you offer a benefit of $50,000 per year then any enterprise would be ready to pay you $12,345.
Brendan wrote:So the "cloud" is a private server shared by 1 to 5 computers?
No, it's about hundreds or even thousands.
Brendan wrote:Middle level enterprises have always used servers for accounting and not desktop PCs, so for that case "cloud" didn't replace anything that was done on desktop PCs.
The cloud replaces an enterprise's servers, but not its desktops. And next, desktops can be replaced with tablets.
Brendan wrote:For home/personal use (what I was actually talking about) it was/is done on desktop PCs and nobody uses cloud, so for that case "cloud" still hasn't replaced anything that was done on desktop PCs (and cheap smartphones are powerful enough for that case).
Home/personal usage is about your personal interest in the services that a web company offers you. And it is simply invisible to you what they are actually using, a cloud or their own servers.
Brendan wrote:Given the choice between signing an agreement and not signing an agreement, I doubt I'd sign an agreement.
Enterprises are different here. They already have a problem - their employees (nobody trusts anybody in this world). It makes a difference for an enterprise to sign an agreement with one financially weighty entity instead of with many employees. So the whole bunch of programmers and administrators can be offloaded from the security department's care if an enterprise signs an agreement with a cloud provider.

Re: Do you agree with linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:How do you get out of "text input mode" without using a different input device?
Well, I will use a different input device. Is that a problem? After having my text recognized quicker than I could type it, I see no problem in a finishing gesture.
After using slower and much more error-prone speech recognition to input the text, only to find that faster/better input devices had to exist anyway, I think I know what my "finishing gesture" will be.
embryo wrote:
Brendan wrote:Yes, most smartphone users accept the speed of touch today (which is far slower than using a full sized keyboard), because even a tiny and awkward on-screen keyboard is still better than speech.
It's about "I think it's impossible". Have you tried at least online demos?
I'm only saying exactly what everyone elsewhere is saying (an example).

I think it's just wishful thinking on your part. Have you actually bothered to look at any of the research into speech recognition error rates?
embryo wrote:[url=http://www.slate.com/articles/technology/technology/2014/04/the_end_of_typing_speech_recognition_technology_is_getting_better_and_better.html]Read this test result[/url] if you don't want to test it yourself.
You're deluded. From the article you linked to:
  • Sorry, that should have been “weren’t.”
  • (Sorry that was supposed to be “dismissed” not “just missed.” And “vaunted” not “haunted.”)
  • How men’s are still problematic, for one thing. I mean homonyms are still problematic
That's 4 errors in 4 paragraphs. The author also says:
  • homonyms are still problematic
  • if you want punctuation marks, you have to speak them out loud
  • I’m going to go back to typing on my laptop now, both because I need my notes and because I’m sure both you and my editor are tired of the typos.
Basically, it was crap so he gave up and switched to keyboard after only 5 paragraphs.
embryo wrote:
Brendan wrote:And then they plug one end of the optical fibre into a USB port and the other end into a passing cloud; and get 1 petabit of upload/download speed without anyone charging them "$x per MiB of data" and without buying more expensive/higher speed networking cards, switches, cabling, etc?
The infrastructure that is capable of transmitting all the required data is already there. An enterprise just needs to redirect a connection from its servers to the cloud. If there is a need to combine data flows of more than 1 Gbit (a standard card's capacity), then it will cost an enterprise very little compared with the benefits ($50,000 per year). It is almost the same on the cloud's end. Next, a modern city infrastructure includes those petabit-capable fibers, and all that an enterprise needs is the last-mile connection, which is really cheap compared with the benefits. So, it seems that it's again about "I just don't believe it".
More delusions. A typical company will run cheap (~1 gigabit) lines to cheap (~1 gigabit) switches *if* you're lucky and it's not old equipment they installed 10 years ago. They're not going to rip all of it out and replace it with higher-speed links to higher-speed switches just so some silly office workers can uninstall the word processor they've already got and use "cloud" for their word processing instead.
embryo wrote:
Brendan wrote:Middle level enterprises can afford to send me $12345 per week for nothing. Just because they can afford it, doesn't mean they're stupid enough to waste money for no reason.
If you offer a benefit of $50,000 per year then any enterprise would be ready to pay you $12,345.
Right - companies do whatever makes sense (and therefore won't shift to idiotic "cloud" nonsense that doesn't make sense).
embryo wrote:
Brendan wrote:So the "cloud" is a private server shared by 1 to 5 computers?
No, it's about hundreds or even thousands.
So do you have a single "cloud toilet" in your city that is shared by hundreds or thousands of people (because "economies of scale" makes it cheaper in theory)?
embryo wrote:
Brendan wrote:Middle level enterprises have always used servers for accounting and not desktop PCs, so for that case "cloud" didn't replace anything that was done on desktop PCs.
The cloud replaces an enterprise's servers, but not its desktops. And next, desktops can be replaced with tablets.
Brendan wrote:For home/personal use (what I was actually talking about) it was/is done on desktop PCs and nobody uses cloud, so for that case "cloud" still hasn't replaced anything that was done on desktop PCs (and cheap smartphones are powerful enough for that case).
Home/personal usage is about your personal interest in the services that a web company offers you. And it is simply invisible to you what they are actually using, a cloud or their own servers.
Brendan wrote:Given the choice between signing an agreement and not signing an agreement, I doubt I'd sign an agreement.
Enterprises are different here. They already have a problem - their employees (nobody trusts anybody in this world). It makes a difference for an enterprise to sign an agreement with one financially weighty entity instead of with many employees. So the whole bunch of programmers and administrators can be offloaded from the security department's care if an enterprise signs an agreement with a cloud provider.
Basically what you're saying here is that "cloud" is *not* replacing desktop PCs.


Cheers,

Brendan

Re: Do you agree with linus regarding parallel computing?

Post by embryo »

Brendan wrote:I'm only saying exactly what everyone elsewhere is saying (an example).
Accuracy is not a show-stopper here. While errors exist (and always will, just as one human sometimes cannot recognize a word spoken by another human), their importance is overstated by you.
Brendan wrote:
embryo wrote:[url=http://www.slate.com/articles/technology/technology/2014/04/the_end_of_typing_speech_recognition_technology_is_getting_better_and_better.html]Read this test result[/url] if you don't want to test it yourself.
You're deluded. From the article you linked to:
  • Sorry, that should have been “weren’t.”
  • (Sorry that was supposed to be “dismissed” not “just missed.” And “vaunted” not “haunted.”)
  • How men’s are still problematic, for one thing. I mean homonyms are still problematic
That's 4 errors in 4 paragraphs. The author also says:
  • homonyms are still problematic
  • if you want punctuation marks, you have to speak them out loud
  • I’m going to go back to typing on my laptop now, both because I need my notes and because I’m sure both you and my editor are tired of the typos.
Basically, it was crap so he gave up and switched to keyboard after only 5 paragraphs.
Well, you have missed the conclusion:
Still, I wouldn’t have dreamed of trying to compose even a brief work-related email on a smartphone by voice just a couple of years ago, let alone a full-length Slate column
And you have also missed the whole point of the article - it's now absolutely possible to dictate your texts while walking. Not just record audio, but have it turned into text. Just imagine how cool it could be for many people to walk somewhere and dictate their impressions without any keyboard and without any distraction from the pleasure of walking. And then, after some corrections of course, they can post the text as a blog or as a message on a social network, where their impressions will be shared and discussed almost INSTANTLY and will gather new impressions of the discussed impressions (a second derivative). And it is absolutely useful, it is working, it is accessible, it is here and now. But you still refuse to see it.

And by the way, the following is just another thing you missed:
Sorry that was supposed to be “dismissed” not “just missed.”
Such mistakes are corrected on the fly, just as the author says:
Sometimes you can actually see the software recalibrating on the fly. Recently I told my Google app, “Remind me to email Ben at 4 o’clock.” At first I saw it type, “Remind me to email Bennett.” But when it heard the words “4 o’clock,” it realized I had more likely said “Ben at” then “Bennett,” and it duly set the proper reminder.
So the "homonyms are still problematic" part is actually in much better shape than you gathered from the first few lines of the article.
Brendan wrote:A typical company will run cheap (~1 gigabit) lines to cheap (~1 gigabit) switches *if* you're lucky and it's not old equipment they installed 10 years ago.
OK, but that means there's no need for speeds of more than 100 Mbit. Then the cloud communication overhead will be at least 10 times less important than you expected.

Re: Do you agree with linus regarding parallel computing?

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:Basically, it was crap so he gave up and switched to keyboard after only 5 paragraphs.
Well, you have missed the conclusion:
Still, I wouldn’t have dreamed of trying to compose even a brief work-related email on a smartphone by voice just a couple of years ago, let alone a full-length Slate column
I didn't miss the conclusion. A value can be extremely negative (e.g. -10000) and increase drastically and yet still be negative (e.g. -333). In the same way, it's entirely possible for something that's extremely bad to improve drastically and yet still be bad.
embryo wrote:
Brendan wrote:A typical company will run cheap (~1 gigabit) lines to cheap (~1 gigabit) switches *if* you're lucky and it's not old equipment they installed 10 years ago.
OK, but that means there's no need for speeds of more than 100 Mbit. Then the cloud communication overhead will be at least 10 times less important than you expected.
I'm not sure if you've ever seen a network before; but often you have more than one computer connected to an Ethernet switch (in fact, there's no point having a switch when there's only 1 computer). That 1 gigabit uplink that 20+ computers have been sharing for years without any problem at all is now a major bottleneck, because you've got 20+ computers using "the cloud" for no sane reason whatsoever. Then one guy decides to download a copy of LibreOffice and 19+ people start getting 4 seconds of lag in their word processor. Yay.
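As a back-of-envelope check of the contention argument, here's a small sketch. All the figures are assumptions for illustration (a 1 Gbit uplink, 20 office users, a ~300 MB installer, and a switch that shares the link fairly among active flows):

```python
# Back-of-envelope model of a shared office uplink. Numbers are
# illustrative assumptions, not measurements of any real network.

UPLINK_BPS = 1_000_000_000          # 1 gigabit uplink, shared by everyone

def fair_share_bps(active_flows: int, uplink_bps: int = UPLINK_BPS) -> float:
    """Bandwidth each flow gets if the link is shared fairly."""
    return uplink_bps / active_flows

def transfer_seconds(size_bytes: int, rate_bps: float) -> float:
    """Time to move size_bytes at rate_bps (8 bits per byte)."""
    return size_bytes * 8 / rate_bps

# 20 workers doing "cloud word processor" traffic, then one of them
# starts a ~300 MB download (roughly a LibreOffice installer).
office_users = 20
download_size = 300 * 1024 * 1024   # assumed installer size

share = fair_share_bps(office_users + 1)   # everyone now competes
print(f"per-flow share: {share / 1e6:.1f} Mbit/s")
print(f"download takes: {transfer_seconds(download_size, share):.1f} s")

# A 100 KB document round-trip that used to feel instant:
doc = 100 * 1024
print(f"document sync:  {transfer_seconds(doc, share) * 1000:.1f} ms")
```

The point of the sketch is that one bulk transfer cuts every other flow's share by an order of magnitude, which is exactly the lag the office workers would notice.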


Cheers,

Brendan