
Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 2:19 pm
by Rusky
There is certainly an element of "same tired old crud" in the games industry, but trying to cram every game into the box of a fully simulated world with unlimited attention to realism is the whole reason for that problem in the first place. Many of the best games of all time succeed not in spite of these "problems" but because of them. For example:
  • "freeze everywhere in the world except where the player is"- platformers controlling exact enemy placement when the character arrives
  • "max. limits on the number of entities"- RTS resource and unit type limits
  • "suddenly become mature in an instant or never age at all"/"flame-throwers being used in book shops where nothing catches fire"- carefully planned but mostly-indestructible level design can be more fun and immersive than "oh lol we can make everything destructible" level design, unless "destroy everything" is the point of the game
  • "medical miracles"- fast-action FPS tactics, RPG spell and item tactics
  • "limited to the size of a small town"- storytelling in games like Portal, or even smaller areas like in Gone Home, or even smaller areas like Tetris
Like I said before, making it possible to remove these limits is good, but forcing their removal is not. Competent game designers will just add the limits back, until they want to create a specific game mechanic. There's the obvious "ooh open, living world" one, but that's not for every game. There are other reasons- gradual growth, persistent wounds, realistic space navigation, and destructible terrain have all been the basis of great games that break all the other rules of "realism."

Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 5:04 pm
by Brendan
Hi,
Rusky wrote:There is certainly an element of "same tired old crud" in the games industry, but trying to cram every game into the box of a fully simulated world with unlimited attention to realism is the whole reason for that problem in the first place. Many of the best games of all time succeed not in spite of these "problems" but because of them. For example:
  • "freeze everywhere in the world except where the player is"- platformers controlling exact enemy placement when the character arrives
Dubious at best - I doubt any normal player has said "this game is awesome because enemies are always predictable".
Rusky wrote:
  • "max. limits on the number of entities"- RTS resource and unit type limits
There's a difference between (e.g.) limiting resources (and therefore limiting units) to make the game more challenging, and limiting units simply because the CPU can't handle it.

For an example of the latter, in Cities:Skylines there's a limit on the number of units/traffic that can exist at any point in time. For a medium-size city you hit that limit and have traffic problems; as the city gets larger and you have a higher population, the same limited amount of traffic gets spread over a larger area and you get fewer traffic problems. For a massive city you can forget about fancy highways and complicated interchanges, and just slap a grid of 2-way roads everywhere.
Rusky wrote:
  • "suddenly become mature in an instant or never age at all"/"flame-throwers being used in book shops where nothing catches fire"- carefully planned but mostly-indestructible level design can be more fun and immersive than "oh lol we can make everything destructible" level design, unless "destroy everything" is the point of the game
I doubt this. I think a carefully planned level design can be much more affectable (and more immersive) even when it's not destructible; and (some) game developers are just lazy. For example; those books could catch fire, and burn for a while and end up blackened (and slowly return to normal after that, if necessary), and it'd be more immersive rather than less (even though the books still exist).
Rusky wrote:
  • "medical miracles"- fast-action FPS tactics, RPG spell and item tactics
  • "limited to the size of a small town"- storytelling in games like Portal, or even smaller areas like in Gone Home, or even smaller areas like Tetris
If it's intentional, that's fine. Often it's not. For example, compare Minecraft on PC to Minecraft on Xbox 360 (or PlayStation 3) and see if you can determine which is crippled beyond belief due to hardware limitations.
Rusky wrote:Like I said before, making it possible to remove these limits is good, but forcing their removal is not. Competent game designers will just add the limits back, until they want to create a specific game mechanic. There's the obvious "ooh open, living world" one, but that's not for every game. There are other reasons- gradual growth, persistent wounds, realistic space navigation, and destructible terrain have all been the basis of great games that break all the other rules of "realism."
I'm not preventing games from having no limits where they're desirable, and I'm not even removing limits caused by "not enough CPU/memory" when they're not desirable. All I'm doing is shifting those limits. For example, rather than saying "one computer is only powerful enough to handle the AI for 10000 entities" developers can say "N computers are powerful enough to handle N * 10000 entities"; and rather than saying "Supporting books that burn and babies that grow is going to cost too much developer time" developers can say "We can use existing assets for books that burn and babies that grow and reduce the developer time".


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 6:21 pm
by Rusky
Brendan wrote:rather than saying "Supporting books that burn and babies that grow is going to cost too much developer time" developers can say "We can use existing assets for books that burn and babies that grow and reduce the developer time".
Back to my earlier point, no competent game developer is ever going to just pull in a common chicken asset, or a common burning book asset, or a common growing baby asset. They might leverage distributed computing to get the capability, but I guarantee they would handle the actual asset themselves.

At best your plan will create a kind of sandbox world with a community that creates the equivalent of in-depth mods for it. It will not affect how other games are made.

Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 8:24 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:rather than saying "Supporting books that burn and babies that grow is going to cost too much developer time" developers can say "We can use existing assets for books that burn and babies that grow and reduce the developer time".
Back to my earlier point, no competent game developer is ever going to just pull in a common chicken asset, or a common burning book asset, or a common growing baby asset. They might leverage distributed computing to get the capability, but I guarantee they would handle the actual asset themselves.
In general; that's not how anything on my OS is supposed to be done (and people used to creating software for Windows/Linux will take time to get used to it).

How things are supposed to be done is that nobody makes a whole application, or a whole game, or a whole GUI. Instead; they participate in the creation of standard file formats and standard messaging protocols; then people create individual processes and services that comply with those standards; where an application/game/GUI consists of multiple processes/services written by completely unrelated people, and each of those pieces can be replaced by anyone at any time for whatever reason.
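As a very rough illustration (the message names and fields below are hypothetical, just to show the shape of the idea), a standard "combustible object" protocol might be nothing more than an agreed set of message IDs and payload layouts that any process - written by anyone - can implement or consume:

Code: Select all

/* Hypothetical "combustible object" protocol - any process that owns burnable
   entities (books, furniture, ...) agrees to accept/emit these messages.
   The names and layout here are made up purely for illustration. */

#include <stdint.h>

#define MSG_IGNITE      0x0301  /* sender asks the object to start burning      */
#define MSG_EXTINGUISH  0x0302  /* sender asks the object to stop burning       */
#define MSG_BURN_STATE  0x0303  /* object reports how charred/burnt it is       */

typedef struct {
    uint32_t message_type;      /* one of the MSG_* IDs above                   */
    uint32_t sender_id;         /* ID of the process/object that sent it        */
    uint64_t timestamp;         /* when the event occurred                      */
    union {
        struct { float temperature; }      ignite;      /* MSG_IGNITE payload     */
        struct { float charred_fraction; } burn_state;  /* MSG_BURN_STATE payload */
    } payload;
} combustible_msg_t;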
Rusky wrote:At best your plan will create a kind of sandbox world with a community that creates the equivalent of in-depth mods for it. It will not affect how other games are made.
Initially, you're right.

As each of the pieces is improved one by one and the "sum of the parts" evolves and gains features and code quality; eventually that community of mods grows into something superior to anything traditional game developers have ever made; while at the same time the cost of developing a game like GTA V (1000 people for about 4 years at a cost of over $250 million) drops to "3 high school kids slapped it together in their spare time over 6 months" because they don't need to create models for cars, or create buildings, or do animation, or handle AI, or figure out physics, or write a quest system, and only need to do story writing and "glue". 8)


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 8:29 pm
by Rusky
Brendan wrote:they don't need to create models for cars, or create buildings, or do animation, or handle AI, or figure out physics, or write a quest system, and only need to do story writing and "glue".
This is exactly how you get "just the same tired old crud from 10 years ago (just with slightly better graphics than last time)," only the tired old crud will also be derivative and generic like 90% of the games written by "high school kids slapping it together in their spare time over 6 months," using all the pre-existing models and physics and quest systems that already exist.

Have a fun pipe dream.

Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 9:15 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:they don't need to create models for cars, or create buildings, or do animation, or handle AI, or figure out physics, or write a quest system, and only need to do story writing and "glue".
This is exactly how you get "just the same tired old crud from 10 years ago (just with slightly better graphics than last time)," only the tired old crud will also be derivative and generic like 90% of the games written by "high school kids slapping it together in their spare time over 6 months," using all the pre-existing models and physics and quest systems that already exist.
Wrong.

How you avoid "just the same tired old crud from 10 years ago" is you allow game developers to focus on the parts that are actually new and different, and help them avoid wasting massive amounts of time and $$ doing the parts that are the same over and over and over again.

Something like GTA V would be relatively easy because it is (mostly) a realistic sandbox and I'd expect a lot of pieces wouldn't need to be created specifically for it (but the story writing and quests would need to be written and can't be recycled). Something like StarCraft II would be much more artwork-heavy (because all the entities are unique and would need to be created).


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Tue Aug 04, 2015 9:36 pm
by Rusky
If you take a look at the most interesting games, there's not much correlation with how much they reused existing tech. On one side, you have all the story-heavy games from places like BioWare, which just reuse existing engines. On the other side, you have a lot of indie games like Minecraft or Braid that were built from scratch.

If you look at the easiest-to-reuse engines, like Unity or GameMaker, there is an overwhelming tide of terrible, unoriginal "me too" games. This says to me that your hypothesis, while it makes intuitive sense, is not true. A rising tide of easy reuse raises all games, and pulls more crappy ones above the threshold where they can exist.

This isn't to say easy reuse is a bad thing, just that your style of reuse isn't going to help games. Your style of reuse makes mod-style games easier, but more powerful tools and engine reuse are what make good games easier to make, because good games won't be able to reuse your chicken AI.

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 5:54 am
by embryo2
Rusky wrote:If you look at the easiest-to-reuse engines, like Unity or GameMaker, there is an overwhelming tide of terrible, unoriginal "me too" games.
As I see it, Brendan wishes to limit the "overwhelming tide of terrible...", but he employs an "I will control everything" approach. Such an approach won't work (and Brendan sees it), so he adds some committees and other processes that help to elaborate the system. But public work (all those committees) is not the area where Brendan excels. So, the only way is still "I will control everything".

The "overwhelming tide of terrible..." is the society problem, but it is seem to me that Brendan wants to fight it with his developer skills only. Also the "really good OS" or "really good game" is the society thing. But some technical solutions can help the society to minimize it's problems, so, may be Brendan can create such a tool set and finally really help to decrease the "overwhelming tide of terrible...".

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 7:45 am
by AndrewAPrice
Your "process per object" is very much Agent-oriented programming. I thought I came up with the idea back in university until my professor told me that it has already been done.

The main problem with this approach is that it's not the most efficient way to handle process-intensive agents. For example, your physics agent tries to create a thread per core, your rendering agent tries to create a thread per core, etc. But suddenly, because you have more than one worker thread per core, the OS might end up assigning most of your worker threads from the same agent to that same core, losing the parallelism advantages. The other disadvantage is that game developers like to keep things in lockstep (which is especially important in network games - think Age Of Empires - instead of broadcasting what thousands of little simulated people are doing, it just sends the high level user actions across the network, with every client running a deterministic simulation in lockstep.)
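In rough C-style pseudocode (all of these names are invented), the lockstep idea boils down to something like this:

Code: Select all

#include <stdint.h>

#define MAX_PLAYERS 8

typedef struct { /* ...small "move unit X to Y" style commands... */ int dummy; } command_list_t;

/* assumed to exist elsewhere - hypothetical names */
command_list_t collect_local_input(void);
void broadcast_commands(uint32_t tick, const command_list_t *cmds);
void wait_for_commands_from_all_players(uint32_t tick, command_list_t all[MAX_PLAYERS]);
void simulate_one_tick(const command_list_t all[MAX_PLAYERS]);

void lockstep_main_loop(void) {
    for (uint32_t tick = 0; ; tick++) {
        command_list_t local = collect_local_input();
        broadcast_commands(tick, &local);               /* only tiny command packets cross the network */

        command_list_t all[MAX_PLAYERS];
        wait_for_commands_from_all_players(tick, all);  /* blocks for roughly one round-trip           */

        simulate_one_tick(all);                         /* identical result on every client, as long   */
    }                                                   /* as the simulation is fully deterministic    */
}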

The mainstream approach to scalable parallelism in high-end game engines today is the thread pool pattern. If the game is running on a 6-core computer, then 6 worker threads are created. There is a pool of tasks, and whenever a worker thread isn't busy processing it grabs the next task to be done.

Simple example: You have 100 objects in your game with their own update routine, so at the start of the update loop you add your 100 update calls to the task pool, and let the worker threads go crazy.

Another simple example: Your graphics renderer is trying to walk an octree (a recursive data structure) to find out what is visible on the screen. Each time you scan a node, you add the function call to scan the children onto the pool of tasks.

The thread pool pattern gets the best processing throughput with relatively low overhead; as long as there is work to do and your tasks are fairly fine-grained, all your processor cores tend to be fully utilized.
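A bare-bones sketch of the pattern (names made up, no shutdown or full-queue handling) might look like this:

Code: Select all

#include <pthread.h>

/* A deliberately simplified thread pool (fixed-size queue, no shutdown path);
   real engines use lock-free queues, job dependencies, etc. */

typedef struct { void (*run)(void *arg); void *arg; } task_t;

static task_t          queue[1024];
static unsigned        head = 0, tail = 0;
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

void submit_task(void (*run)(void *), void *arg) {
    pthread_mutex_lock(&lock);
    queue[tail++ % 1024] = (task_t){ run, arg };   /* e.g. one of the 100 update calls    */
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}

static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)                       /* idle: wait until a task is queued   */
            pthread_cond_wait(&ready, &lock);
        task_t t = queue[head++ % 1024];
        pthread_mutex_unlock(&lock);
        t.run(t.arg);                              /* the task may submit more tasks      */
    }                                              /* (e.g. scanning octree child nodes)  */
    return NULL;
}

void start_pool(int cores) {                       /* e.g. 6 workers on a 6-core machine  */
    for (int i = 0; i < cores; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
    }
}
submit_task() is what the update loop would call 100 times at the start of each frame in the first example above.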

The main problem with this approach is that it requires tasks to cooperate. A hanging task holds up a worker thread. And I haven't seen anyone try to solve the contention that arises when multiple processes each do this (e.g. multiple programs each running 1 worker thread per core per process), so I've been fascinated with the idea of an OS that provides a global task pool.

Also, the idea of having a library of objects is the motivation behind Unity's Asset Store. You can import high quality trees and road pieces to build a track, import a high quality car model with physics and controls already programmed in, import a Car AI library, slap on a nice menu, and you have a realistic car racing game in under a day.

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 8:10 am
by AndrewAPrice
embryo2 wrote:
Rusky wrote:If you look at the easiest-to-reuse engines, like Unity or GameMaker, there is an overwhelming tide of terrible, unoriginal "me too" games.
As I see it, Brendan wishes to limit the "overwhelming tide of terrible...", but he employs an "I will control everything" approach. Such an approach won't work (and Brendan sees it), so he adds some committees and other processes that help to elaborate the system. But public work (all those committees) is not the area where Brendan excels. So, the only way is still "I will control everything".

The "overwhelming tide of terrible..." is the society problem, but it is seem to me that Brendan wants to fight it with his developer skills only. Also the "really good OS" or "really good game" is the society thing. But some technical solutions can help the society to minimize it's problems, so, may be Brendan can create such a tool set and finally really help to decrease the "overwhelming tide of terrible...".
Perhaps it's because we've reached a stage where the tools really are so simple that anyone can create a basic game. A kid can download the Infinite Runner starter kit, replace the player with a Chicken they imported from the Asset Store, change the scenery with barns and fences they also imported, and simultaneously release their game on desktop, web, and mobile in the same weekend. Some amazing stuff is made in Unity/Game Maker, but naturally the "tide of terrible" is going to increase the easier you make the tools. This is more a problem of their lack of skills and ideas than of the tools.

This applies to anything that can be done on a cheap budget- websites, home videos, etc.

I think filtering out the tide of terrible is a job for App Stores with better search, categorization, and ranking, and websites that offer reviews and user ratings.

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 8:52 am
by Brendan
Hi,
Rusky wrote:If you take a look at the most interesting games, there's not much correlation with how much they reused existing tech. On one side, you have all the story-heavy games from places like BioWare, which just reuse existing engines. On the other side, you have a lot of indie games like Minecraft or Braid that were built from scratch.
And yet if you actually think about breaking it up into pieces, Minecraft becomes:
  • a generic renderer (capable of supporting "grids of blocks", as discussed previously)
  • a generic inventory management system for player inventories, chests, etc
  • a generic recipe system to determine which items/ingredients get converted to what using which method of conversion (heat, redstone level changes, crafting)
  • a generic 3D sound system
  • a generic "entity stats" system (for tracking health, hunger, XP points, current potion effects, etc)
  • generic timer support (provided by kernel)
  • a generic deterministic random number generator (too small, cut & paste that)
  • either a generic electronics simulator (better) or something custom designed to match how crappy/quirky redstone is in the real Minecraft
  • either a generic NPC AI system (better) or something custom designed to match how crappy mob AI is in the real Minecraft
  • either a generic combat and player control system (better) or something custom designed to match how crappy combat is in the real Minecraft
  • either a generic physics system (better) or something custom designed to match how crappy physics is in the real Minecraft
  • either a generic weather and day/night system (better) or something custom designed to match how crappy it is in the real Minecraft
  • custom "glue" to handle updating/deleting blocks based on the other systems, making saplings instantly become trees, etc
  • custom code to generate new chunks from the world seed
  • custom code to handle saving/loading data
  • custom code for the pre-game user interface and in-game user interface
  • custom meshes and textures for mobs, player, blocks (and sky if you want it to look the same)
I'd estimate (depending on how you split it up and whether you want "better" or "clone", and whether you want to split it into more processes to give the OS more flexibility distributing load) about half the code doesn't need rewriting.

For Braid, I don't know (I've never seen it); but the Wikipedia page makes it sound like a generic platform game and a generic "jigsaw puzzle" part; except the "glue" records where things are while you're playing and allows that data to be replayed.
Rusky wrote:If you look at the easiest-to-reuse engines, like Unity or GameMaker, there is an overwhelming tide of terrible, unoriginal "me too" games. This says to me that your hypothesis, while it makes intuitive sense, is not true. A rising tide of easy reuse raises all games, and pulls more crappy ones above the threshold where they can exist.

This isn't to say easy reuse is a bad thing, just that your style of reuse isn't going to help games. Your style of reuse makes mod-style games easier, but more powerful tools and engine reuse are what make good games easier to make, because good games won't be able to reuse your chicken AI.
If reuse raises all games and pulls more crappy ones above the threshold where they can exist; then I don't see much of a problem (assuming there's some sort of rating system/reviews that users can use to filter out unpopular/crappy games).

However, what I'd be hoping for is much much larger games. Specifically; instead of one game where you're stranded somewhere and try survive, then a separate game where you play the role of city council and build/manage a town, then a third game where you play as a criminal within a city, then a fourth game where you manage a football team, then a fifth game where you're a soldier who's sent to fight in various battles, and a sixth game where you're running a country and sending your armies to conquer other countries; what I imagine is a single game where you're stranded somewhere and try to survive, and build a small settlement, and create a council/government, and build it into a large city, and can decide to play as a criminal within that city or manage a football team or be a taxi driver or gardener or architect or soldier, or move to another city (either computer generated or player/developer designed) and do any of 100 jobs in that city, or just run the entire country and its armies.

Basically; I want a massive simulated universe of many worlds acting as a (multi-player) sandbox game; where each player chooses which role they want to play within that universe (from small things like being rat catcher in a specific city on a specific planet, to massive things like trying to achieve universal domination by conquering worlds) and where any player can switch between roles whenever they like; and where anything a player isn't doing is taken over by AI.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 11:46 am
by Brendan
Hi,
MessiahAndrw wrote:Your "process per object" is very much Agent-oriented programming. I thought I came up with the idea back in university until my professor told me that it has already been done.
Your professor was right.

The idea of "software as cooperating isolated pieces that communicate" dates back at least as far as the actor model (1973), is mostly what Alan Kay meant when he invented the term "OOP", is the basis of highly fault tolerant systems, plays a significant role in large scale/highly parallel systems (e.g. super-computers), and is mimicked by networking protocols and the modern Internet.

It has had many names, and even though the details (e.g. the specifics of the communication mechanism, and the granularity - large numbers of tiny objects vs. smaller number of large objects) vary significantly, the underlying concept remains the same.

In general there's a compromise between communication costs and parallelism; and this compromise determines granularity. Basically when there's high communication costs you want a small number of large objects (to minimise the communication costs), and when there's negligible communication costs you want a large number of tiny objects (to maximise parallelism).

For my approach, the granularity of objects varies - you can have a process for each individual chicken, or a process for a collection of many chickens, or a process for a large collection of animals. Within each process you have multiple threads, and they could be used as "one thread per sub-object" but they could also be used as "master thread and worker threads" or anything else.
MessiahAndrw wrote:The main problem with this approach is that it's not the most efficient way to handle process-intensive agents. For example, your physics agent tries to create a thread per core, your rendering agent tries to create a thread per core, etc. But suddenly, because you have more than one worker thread per core, the OS might end up assigning most of your worker threads from the same agent to that same core, losing the parallelism advantages.
Several things here...

First; you're assuming that all threads consume as much CPU time as they can. For most threads this isn't the case - they'll sit idle most of the time waiting for a request/message to arrive, then handle that request and send a reply, then go back to being idle. If 1 thread handling 100 requests per second uses 10% of one CPU, then 100 threads each handling 1 request per second uses 10% of one CPU.

Real-time rendering is special because it can easily use all CPU time you throw at it and still not get enough CPU time; but this means it must be able to sacrifice quality (either detail or frame rate or both) when it doesn't get enough CPU time, and that means the scheduler can give it whatever amount of time it likes (e.g. any CPU time other threads didn't use, and/or limit it to a percentage of total CPU time).

You're also assuming that all threads are equally important. They mostly aren't. Something that (e.g.) determines if grass and plants should grow, or fire should spread, or if that chicken far in the distance lays an egg? These aren't very important and can be low priority threads. A keyboard driver? That's very important and needs very high priority. Mostly, thread priorities are used to determine what gets sacrificed/postponed when there isn't enough CPU time.

Finally; you're assuming that any process can use all CPUs. For a distributed system like mine this isn't true - a process can only use CPUs that are within one computer (and processes are typically limited to CPUs within one NUMA domain instead). If physics takes a thread per core on one computer, and renderer takes a thread per core on another computer, then they're not competing for CPU time.
MessiahAndrw wrote:The other disadvantage is that game developers like to keep things in lockstep (which is especially important in network games - think Age Of Empires - instead of broadcasting what thousands of little simulated people are doing, it just sends the high level user actions across the network, with every client running a deterministic simulation in lockstep.)
What you see as a disadvantage, I see as an advantage.

Essentially you're breaking time up into small pieces, and any events that happen within a time period you pretend didn't happen until the end of the time period. For what you're thinking of, the time periods have to be larger than "round trip network latency"; and for slower networking (Internet) the time periods need to be large (relative to a human's perceptions), which translates directly into "laggy" (or network latency has a direct effect on responsiveness). It is also unfair/racy, in that one player can do an action before the other player does, but both players see the later action occurring first and the earlier action occurring later.

Now imagine what happens if you break time into much smaller pieces, like nanoseconds. When the time period used is less than network latency you can't rely on "committed state" alone (actions that have occurred may not be received yet); and you need to have both "committed state" and "predicted future state" plus the ability to deal with false predictions. This is a little harder. However, it is not unfair/racy, and it does not have the "network latency has a direct effect on responsiveness" relationship. Instead, higher latency means a higher chance of false predictions and more corrections. Depending on how you make predictions and how you correct them, these corrections can be almost entirely unnoticeable even with severe network latency (e.g. packets being dropped due to congestion). Also note that you can still bunch up multiple actions that have occurred at slightly different times and send a single report (it's just that you'd be attaching a time stamp to each action rather than having a single time that's "assumed" for all actions).
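As a rough sketch of the shape of this (every name here is hypothetical), the "committed state plus predicted state" idea is just:

Code: Select all

#include <stdint.h>

/* Illustrative only - the structure of "committed state + predicted state" handling. */

typedef struct { uint64_t time; /* ...entity data... */ } world_state_t;
typedef struct { uint64_t time; /* ...what happened... */ } event_t;

/* helpers assumed to exist elsewhere */
void apply_event(world_state_t *s, const event_t *e);
void insert_into_history(const event_t *e);
void replay_history_since(world_state_t *s, uint64_t from_time);

static world_state_t committed;   /* state at the last moment all events are known */
static world_state_t predicted;   /* committed state plus locally predicted events */

void on_event_received(const event_t *e) {
    if (e->time >= predicted.time) {
        apply_event(&predicted, e);        /* event is "in the future" - the easy case */
    } else {
        /* Late event (network latency): roll back to the committed state, insert
           the event into the history at its timestamp, and re-apply everything
           after it. Higher latency means more corrections, not more input lag.   */
        insert_into_history(e);
        predicted = committed;
        replay_history_since(&predicted, committed.time);
    }
}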
MessiahAndrw wrote:The mainstream approach to scalable parallelism in high-end game engines today is the thread pool pattern. If the game is running on a 6-core computer, then 6 worker threads are created. There is a pool of tasks, and whenever a worker thread isn't busy processing it grabs the next task to be done.

Simple example: You have 100 objects in your game with their own update routine, so at the start of the update loop you add your 100 update calls to the task pool, and let the worker threads go crazy.
Yes; this doesn't scale to 2 or more computers and only does a crude/broken approximation of continuous time.
MessiahAndrw wrote:Also, the idea of having a library of objects is the motivation behind Unity's Asset Store. You can import high quality trees and road pieces to build a track, import a high quality car model with physics and controls already programmed in, import a Car AI library, slap on a nice menu, and you have a realistic car racing game in under a day.
That's very similar to what I want; except I want it fully standardised (so all assets of a certain type have the same controls, etc) and with meta-data (so that processes can dynamically choose from available assets), and without the separation between "developer community" and "modding community".


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 3:25 pm
by AndrewAPrice
Brendan wrote:First; you're assuming that all threads consume as much CPU time as they can. For most threads this isn't the case - they'll sit idle most of the time waiting for a request/message to arrive, then handle that request and send a reply, then go back to being idle. If 1 thread handling 100 requests per second uses 10% of one CPU, then 100 threads each handling 1 request per second uses 10% of one CPU.
I think synchronous thread-blocking IO should die, and an event-based system built around asynchronous messaging that deploys tasks would be much better - as long as the programming environment (language, API) makes it natural to do so. Node.js isn't perfect, but it gives an example of what asynchronous IO can look like.
Brendan wrote:
MessiahAndrw wrote:The mainstream approach to scalable parallelism in high-end game engines today is the thread pool pattern. If the game is running on a 6-core computer, then 6 worker threads are created. There is a pool of tasks, and whenever a worker thread isn't busy processing it grabs the next task to be done.

Simple example: You have 100 objects in your game with their own update routine, so at the start of the update loop you add your 100 update calls to the task pool, and let the worker threads go crazy.
Yes; this doesn't scale to 2 or more computers and only does a crude/broken approximation of continuous time.
You're right that it doesn't, at least not alone. I like the actor model for OOP and messaging, but what if the actors use tasks, rather than threads, to handle messages and do processing? There should be no 'wait for message', a task will be created when a message is received to process that message. There should be no 'sleep for x seconds', you'd just tell the timer to send you back a message after x seconds, and that'll create a task to process that message. I'm thinking this task/messaging system could be very light weight, with little more overhead than a function call. Sending a message is as lightweight as adding a task to the task queue, except that task may be in a different process.

In pseudo code:

Code: Select all

// When Chicken object was created:
Timer.createRepeatingEvent(30, myChicken.think);

function Chicken::think() {
   // runs 30 times per second, Chicken's update logic goes in here
}

// inline handling
myChicken.onMessage("hit with arrow", function(string person_who_hit_me) {
    SpeechBubble.above(myChicken, "Ow, don't hit me " + person_who_hit_me + "!");
});

//////////////////////////////////////////
// from some other process that has a reference to our Chicken
//////////////////////////////////////////
// send a Chicken a "hit with arrow" message from "Andrew"
someChicken.sendMessage("hit with arrow", ("Andrew"));

// maybe our Chicken interface can expose some events we can listen to:
someChicken.onDeath(function() {
   // oh no! our chicken died!
});
If you have a reference to a process/service/actor, this model will work whether it's on the same machine or across a network. A task will be created and added to the task queue on the machine the actor is running. This system should scale across cores and networks. This Chicken application can be in our process, another process, or another machine. It also makes it easy to program parallel algorithms within the same actor (a raytracer can divide the screen into tiles and call a task per tile), and all these tasks would be mixed together with all the cores trying to churn through as many tasks as possible.

Some technical problems with implementing this model:

1. What if the actor disappears? The process quits, the network connection is cut? Potential solution: In an asynchronous API, your function call to send a message could simply return 'false' (as actors would communicate back by sending a new message, rather than return a value through the function call) or throw an exception.

2. What about the overhead of constantly context switching between processes as you rapidly switch between tasks? Still thinking about this one, but I'm hoping to make it relatively light weight by relying on software isolation rather than hardware isolation. But would you not also be rapidly context switching in any event-based system that sends messages then sleeps? Maybe we can set some kind of 'processor affinity' with a task queue per core, where a core only steals tasks from another core's queue if its own queue is empty (see the sketch after this list). In this approach, if you have two heavyweights (like a physics engine and a rendering engine) that are always processing away, their tasks would gravitate towards their own processor core.

3. What if a task gets stuck in a loop or is taking a very long time to process? Could it freeze the operating system by preventing other tasks from running? Since most tasks are light weight and fine grained (event handlers that do one thing, or a single step in a recursive algorithm), I think this won't be an issue for most cases. However, there will be times that a buggy task gets stuck in an infinite loop or you have a long running algorithm. I think the OS should easily be able to detect this (e.g. it sees a task is running for more than one time slice), and can turn that task into a traditional preemptive thread. This could happen transparently, and the programmer only needs to think in the abstraction of parallel tasks.

4. What about something that maliciously spawns lots of tasks and fills the task queue up? I haven't solved this problem, but it's the same as an application sending lots of messages or spawning lots of threads in any other system. I'll have to think about this.
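For point 2, here's a rough sketch of the per-core queues with stealing that I have in mind (names invented, locking deliberately naive; a real scheduler would sleep rather than spin when every queue is empty):

Code: Select all

#include <pthread.h>

/* Per-core task queues with naive work stealing (illustrative only).
   Each queue's lock must be pthread_mutex_init()'d at startup. */

#define NUM_CORES  8
#define QUEUE_SIZE 256

typedef struct { void (*run)(void *arg); void *arg; } task_t;

typedef struct {
    task_t          items[QUEUE_SIZE];
    unsigned        head, tail;
    pthread_mutex_t lock;
} core_queue_t;

static core_queue_t queues[NUM_CORES];

static int try_pop(core_queue_t *q, task_t *out) {
    int got = 0;
    pthread_mutex_lock(&q->lock);
    if (q->head != q->tail) {
        *out = q->items[q->head++ % QUEUE_SIZE];
        got = 1;
    }
    pthread_mutex_unlock(&q->lock);
    return got;
}

static void worker_loop(int my_core) {
    task_t t;
    for (;;) {
        if (try_pop(&queues[my_core], &t)) {            /* 1. prefer our own core's queue    */
            t.run(t.arg);
            continue;
        }
        for (int c = 0; c < NUM_CORES; c++) {           /* 2. otherwise steal from another   */
            if (c != my_core && try_pop(&queues[c], &t)) {
                t.run(t.arg);
                break;
            }
        }
    }
}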

What do you think, Brendan?

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 5:18 pm
by Brendan
Hi,
MessiahAndrw wrote:
Brendan wrote:Yes; this doesn't scale to 2 or more computers and only does a crude/broken approximation of continuous time.
You're right that it doesn't, at least not alone. I like the actor model for OOP and messaging, but what if the actors use tasks, rather than threads, to handle messages and do processing? There should be no 'wait for message', a task will be created when a message is received to process that message. There should be no 'sleep for x seconds', you'd just tell the timer to send you back a message after x seconds, and that'll create a task to process that message. I'm thinking this task/messaging system could be very light weight, with little more overhead than a function call. Sending a message is as lightweight as adding a task to the task queue, except that task may be in a different process.

In pseudo code:

Code: Select all

// When Chicken object was created:
Timer.createRepeatingEvent(30, myChicken.think);

function Chicken::think() {
   // runs 30 times per second, Chicken's update logic goes in here
}

// inline handling
myChicken.onMessage("hit with arrow", function(string person_who_hit_me) {
    SpeechBubble.above(myChicken, "Ow, don't hit me " + person_who_hit_me + "!");
});

//////////////////////////////////////////
// from some other process that has a reference to our Chicken
//////////////////////////////////////////
// send a Chicken a "hit with arrow" message from "Andrew"
someChicken.sendMessage("hit with arrow", ("Andrew"));

// maybe our Chicken interface can expose some events we can listen to:
someChicken.onDeath(function() {
   // oh no! our chicken died!
});
If you have a reference to a process/service/actor, this model will work whether it's on the same machine or across a network. A task will be created and added to the task queue on the machine the actor is running. This system should scale across cores and networks. This Chicken application can be in our process, another process, or another machine. It also makes it easy to program parallel algorithms within the same actor (a raytracer can divide the screen into tiles and call a task per tile), and all these tasks would be mixed together with all the cores trying to churn through as many tasks as possible.

Some technical problems with implementing this model:

1. What if the actor disappears? The process quits, the network connection is cut? Potential solution: In an asynchronous API, your function call to send a message could simply return 'false' (as actors would communicate back by sending a new message, rather than return a value through the function call) or throw an exception.

2. What about the overhead of constantly context switching between processes as you rapidly switch between tasks? Still thinking about this one, but I'm hoping to make it relatively light weight by relying on software isolation rather than hardware isolation. But would you not also be rapidly context switching in any event-based system that sends messages then sleeps? Maybe we can set some kind of 'processor affinity' with a task queue per core, where a core only steals tasks from another core's queue if its own queue is empty. In this approach, if you have two heavyweights (like a physics engine and a rendering engine) that are always processing away, their tasks would gravitate towards their own processor core.

3. What if a task gets stuck in a loop or is taking a very long time to process? Could it freeze the operating system by preventing other tasks from running? Since most tasks are light weight and fine grained (event handlers that do one thing, or a single step in a recursive algorithm), I think this won't be an issue for most cases. However, there will be times that a buggy task gets stuck in an infinite loop or you have a long running algorithm. I think the OS should easily be able to detect this (e.g. it sees a task is running for more than one time slice), and can turn that task into a traditional preemptive thread. This could happen transparently, and the programmer only needs to think in the abstraction of parallel tasks.

4. What about something that maliciously spawns lots of tasks and fills the task queue up? I haven't solved this problem, but it's the same as an application sending lots of messages or spawning lots of threads in any other system. I'll have to think about this.

What do you think, Brendan?
I think the nice thing about "one or more objects per thread, no data shared between threads" is that locks/mutexes are never needed anywhere (except in the kernel); which makes it much easier to write scalable software.

For your idea, there are 2 very different cases. If an object receives 2 or more messages at the same time and the kernel creates 2 or more threads that all operate on the object's data at the same time, then you end up needing locks/mutexes and it ends up complicated.

Alternatively; if an object receives 2 or more messages at the same time and the kernel only creates 1 thread and postpones the other messages until the first thread terminates; then there's no locking needed. In this case it's mostly the same as my system, except that you terminate a thread where I block the thread, and you create a thread where I unblock a thread. However; it's only "mostly the same"; and causes subtle differences in the way threads and messaging behave.

For example, you can't do something like this:

Code: Select all

    do {
        if( check_for_message_without_blocking() == GOT_MESSAGE) {
            handle_message();               // e.g. notice a "cancel that expensive work" request
        }
        finished = do_some_more_work();     // do the next chunk of the long-running work
    } while( !finished );
This is actually relatively important for things that take lots of CPU time but need to listen for messages saying "Hey, cancel that long running/expensive work I asked you to do earlier; I don't need it anymore!".

You also can't (internally) re-order requests and do them in order of request priority (instead of doing them in order of arrival); like this:

Code: Select all

    while( true ) { 
        get_message_with_blocking();
        handle_message();             // Adds request to an internal queue according to request's priority

        while( (next_request = get_highest_priority_pending_request()) != NULL) {
            do_request(next_request);
            while( check_for_message_without_blocking() == GOT_MESSAGE) {
                handle_message();     // Adds request to an internal queue according to request's priority
            }
        }
    }
This is important for a lot of things (service, drivers, etc). For example, imagine a hard disk driver where you've got 10 medium priority requests for normal files, then receive a high priority request from kernel asking for data from swap space, then receive a very high priority "IRQ occurred" message from kernel.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Wed Aug 05, 2015 5:23 pm
by Rusky
Brendan wrote:And yet if you actually think about breaking it up into pieces, Minecraft becomes:
...
I'd estimate (depending on how you split it up and whether you want "better" or "clone", and whether you want to split it into more processes to give to OS more flexibility distributing load) about half the code doesn't need rewriting.
I never said Minecraft had to be rewritten, only that it was part of a body of evidence showing that the ability to re-use code isn't necessarily correlated with higher-quality games.
Brendan wrote:For Braid, I don't know (I've never seen it); but the Wikipedia page makes it sound like a generic platform game and a generic "jigsaw puzzle" part; except the "glue" records where things are while you're playing and allows that data to be replayed.
Braid's time travel is a lot more complicated than just rewinding and replaying.
Brendan wrote:Basically; I want a massive simulated universe of many worlds acting as a (multi-player) sandbox game; where each player chooses which role they want to play within that universe (from small things like being rat catcher in a specific city on a specific planet, to massive things like trying to achieve universal domination by conquering worlds) and where any player can switch between roles whenever they like; and where anything a player isn't doing is taken over by AI.
Cool, but don't fool yourself into thinking that's any kind of replacement for other games with smaller scopes.