data center eras

I am preparing a presentation and found the following list from the CCNA material interesting:
data center 1.0 - 1960 - mainframe
data center 2.0 - 1980 - low-end servers
data center 3.0 - 2000 - virtualization
Now I have to say this is an oversimplification because, for example, the various aspects, elements, and components of data center virtualization appeared over a span of roughly 1952-2013, and it looks like 2000 was chosen as a midpoint, or the point where it started to pick up mainstream adoption.
Then I made up these two and wonder whether they have any validity:
data center 4.0 - 2004 - hyperconvergence (a Lego-building-block approach to data center configuration and expansion)
data center 5.0 - future?
It is either:
Google data centers (AI, water cooling)
Ethereum (decentralized virtual machine)
key takeaway after spending years in the software industry: a big issue is small because everyone jumps on it and fixes it; a small issue is big because everyone ignores it and it causes a catastrophe later. #devilisinthedetails
Kazinsal
Re: data center eras
The disconnect of storage and compute is a really big thing that people often overlook when thinking about what comprises a modern datacentre. Your CPU and RAM for your virtualization environment are often going to be in completely separate physical host machines from your disk space, and your disk space is likely going to be collected and partitioned into various storage pools based on availability, redundancy, speed, etc.
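To put that split in concrete terms, here is a rough toy sketch (the names and the pick_pool helper are made up for illustration, not any vendor's API): the host a VM's CPU/RAM lands on and the pool its disk lands on are two independent decisions.

```python
# Toy model of compute hosts vs. shared storage pools -- purely illustrative,
# the names and the pick_pool() helper are made up, not a real API.
from dataclasses import dataclass

@dataclass
class ComputeHost:
    name: str
    cpus: int
    ram_gb: int          # no VM data disks here; those live in the pools

@dataclass
class StoragePool:
    name: str
    tier: str            # e.g. "fast-ssd" or "bulk-sata"
    redundancy: str      # e.g. "raid10", "raid6"
    free_gb: int

def pick_pool(pools: list[StoragePool], tier: str, size_gb: int) -> StoragePool:
    """Return the first pool of the requested tier with enough free space."""
    for pool in pools:
        if pool.tier == tier and pool.free_gb >= size_gb:
            return pool
    raise RuntimeError(f"no {tier} pool with {size_gb} GB free")

hosts = [ComputeHost("host-01", cpus=64, ram_gb=512),
         ComputeHost("host-02", cpus=64, ram_gb=512)]
pools = [StoragePool("san-fast-01", "fast-ssd", "raid10", free_gb=4_000),
         StoragePool("san-bulk-01", "bulk-sata", "raid6", free_gb=80_000)]

# A new VM takes CPU/RAM from whichever host has headroom, but its disk
# comes from whichever pool matches the storage policy -- separate choices.
print(pick_pool(pools, "fast-ssd", size_gb=200).name)   # -> san-fast-01
```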
Re: data center eras
Kazinsal wrote:The disconnect of storage and compute is a really big thing that people often overlook when thinking about what comprises a modern datacentre. Your CPU and RAM for your virtualization environment are often going to be in completely separate physical host machines from your disk space, and your disk space is likely going to be collected and partitioned into various storage pools based on availability, redundancy, speed, etc.
Yes, I think that is a good point. Although the name says hyperconvergence (which most vendors advertise as compute, storage, and network in a single node), it looks like a convergence of storage onto the compute-and-network node:
SAN/NAS -> local-disk drive.
But I am not sure about this part:
Kazinsal wrote:Your CPU and RAM for your virtualization environment are often going to be in completely separate physical host machines from your disk space, and your disk space is likely going to be collected and partitioned into various storage pools based on availability, redundancy, speed, etc.
According to you, CPU and RAM are going in completely the opposite direction (diverging)? Because CPU and RAM are the only components that are not virtualized (or translated, right?). Everything else (network cards, graphics) is virtualized and represented by software. (Of course there are exceptions, e.g. SR-IOV/VDI, which come back to hardware for performance, but let's put those outside the scope.)
Kazinsal
Re: data center eras
The idea is that you separate your compute resources from your storage resources, both physically and logically, now that we have commercially available extremely high bandwidth links (e.g. 10 Gigabit Ethernet, 10Gig + LACP, 40 Gigabit Ethernet, Fibre Channel) that we can use to link huge arrays of mass storage (storage area networks) to dozens of clustered compute units (each composed of a CPU and some amount of RAM, often split into dedicated control and shared virtualization memory).
When you're working in the land of 10 gigabits per second and higher, you don't need to have your storage physically present alongside your compute.
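Some rough back-of-the-envelope numbers (ballpark figures, ignoring protocol and encapsulation overhead) to show why the link is not the bottleneck:

```python
# Rough comparison of raw link rates vs. typical local drive throughput.
# Numbers are ballpark and ignore protocol/encapsulation overhead.
links_gbit = {"10GbE": 10, "2x10GbE LACP": 20, "40GbE": 40}
local_drives_mb_s = {"7.2k SATA HDD": 180, "SATA SSD": 550}

for name, gbit in links_gbit.items():
    mb_s = gbit * 1000 / 8              # Gbit/s -> MB/s (decimal units)
    print(f"{name:>13}: ~{mb_s:.0f} MB/s over the wire")
for name, mb_s in local_drives_mb_s.items():
    print(f"{name:>13}: ~{mb_s:.0f} MB/s locally")

# Even a single 10GbE link (~1250 MB/s) has more headroom than a local
# SATA SSD, so the disks don't have to sit in the same chassis as the CPU.
```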
Re: data center eras
Kazinsal wrote:The idea is that you separate your compute resources from your storage resources, both physically and logically, now that we have commercially available extremely high bandwidth links (e.g. 10 Gigabit Ethernet, 10Gig + LACP, 40 Gigabit Ethernet, Fibre Channel) that we can use to link huge arrays of mass storage (storage area networks) to dozens of clustered compute units (each composed of a CPU and some amount of RAM, often split into dedicated control and shared virtualization memory).
When you're working in the land of 10 gigabits per second and higher, you don't need to have your storage physically present alongside your compute.
Hmm, I am afraid it is going the opposite way. Yes, it used to be (or still is) that way: SAN/NAS storage kept separate from compute so that multiple servers can access it. However, I think this becomes problematic when you need to expand fast, deploy fast, configure fast, etc.
So with hyperconvergence, it is coming back to local disk again. However, storage is still redundant of course, and data HA is handled by software-defined storage, so if any node (server) in the data center goes south along with its compute/network/storage, one or more copies are always someplace else. This way data center configuration/expansion/contraction can happen fast and become Lego-like.
https://www.youtube.com/watch?v=mGpGG_6l38k
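A toy sketch of that replica-placement idea (my own simplification with a replication factor of 2, not how any particular product actually implements it): every block is written to the node that owns it plus one other node, so losing a single node still leaves a readable copy.

```python
# Toy illustration of software-defined storage replication in a
# hyperconverged cluster -- my own simplification, not a real product's
# algorithm. Replication factor 2: local copy plus one copy on a peer node.
import random

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2

def place_block(local_node: str) -> list[str]:
    """Nodes holding copies of a block: the writing node plus random peers."""
    peers = [n for n in NODES if n != local_node]
    return [local_node] + random.sample(peers, REPLICAS - 1)

placement = {f"blk-{i}": place_block(random.choice(NODES)) for i in range(6)}

failed = "node-b"                       # simulate one node going south
for block, copies in placement.items():
    survivors = [n for n in copies if n != failed]
    print(f"{block}: copies on {copies}, still readable from {survivors}")
```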