New Time Properties and Animation (Kiwi.js v1.1.0)

Hello, and welcome to the sixth in a series of in-depth articles about Kiwi.js v1.1.0, now available for use. Today we’re talking about time and how it relates to smooth motion. I’m Ben Richards, and today I’ll show you some incredibly useful, incredibly simple animation tools.

Who Needs to Read This Post?

All users. Kiwi.js is built to move sprites around the screen, and these tools make it possible to do this much more smoothly.

Time Properties

We’ve added some new properties to the Game and its components. Together with those already available, these will give you more control over your scene. The key properties are explained below.

Technical Information

This explains exactly how these properties work.


game.frame

This property counts the number of frames that the game has drawn since it loaded up.

This value is potentially inaccurate, because some frames may be dropped by the browser if it is busy. We recommend you use game.idealFrame for most situations. However, game.frame is guaranteed to be regular: it increases 1 per frame. It is thus useful if you value even intervals more than smooth time.

You can set the value of game.frame yourself. This should not be done unless you know what you’re doing.


game.idealFrame

This property counts the number of frames that would have been drawn since the game loaded up, had it maintained a perfect frame rate.

Use this property to drive smooth cyclic animations. For example, consider a tree swaying in the wind. Simply set tree.rotation = 0.1 * Math.sin( game.idealFrame * 0.01 ) every frame, and it will sway smoothly, even if the frame rate slows.
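As a minimal sketch (tree here is a hypothetical stand-in for any sprite with a rotation property, and update represents your State’s per-frame update method):

```javascript
// tree stands in for any game object with a rotation property;
// game.idealFrame is faked here so the snippet is self-contained.
var game = { idealFrame: 0 };
var tree = { rotation: 0 };

// Run once per frame, e.g. from your State's update() method.
function update() {
    // Driven by idealFrame, the sway stays smooth even when the
    // real frame rate stutters.
    tree.rotation = 0.1 * Math.sin( game.idealFrame * 0.01 );
}
```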

Unlike game.frame, this property cannot be changed by the user.


game.frameRate

This property controls the number of frames Kiwi.js attempts to render per second.

The maximum value is defined by your browser. It is normally 60. Kiwi.js defaults to 60 frames per second. You do not normally need to change this.

The minimum value is any number larger than 0. You might want a frame rate of 0.016, if you intend to update only once per minute. However, you cannot have a frame rate of 0.

When you set frame rate, it communicates to several behind-the-scenes components and updates some properties. Key among these is game.time.master.idealDelta, which will be explained later.

game.time.master.delta

A delta is a unit of change. In this case, it refers to the number of milliseconds since the last frame.

Under normal circumstances, this should be about 16.7. There are 1000 milliseconds per second, and if you have 60 frames in that second, each frame takes (1000 / 60) = 16.7 ms.

This value is not as useful as some others, as we have provided useful derivatives, but it is available for your use.


game.time.master.elapsed

This measures the time, in milliseconds, since the game loaded up.

We use this value, divided by game.time.master.idealDelta, to determine game.idealFrame.
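In other words (the values here are illustrative, using the 60 fps ideal delta discussed elsewhere in this article):

```javascript
// idealFrame is derived by dividing elapsed time by the ideal
// frame duration.
var idealDelta = 1000 / 60;            // ms per frame at 60 fps
var elapsed = 5000;                    // 5 seconds since the game loaded
var idealFrame = elapsed / idealDelta; // ≈ 300 ideal frames
```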

This property tells you the time as measured at the start of this frame. You should use this value instead of querying the system clock directly, to ensure that all game objects are using the same value. It may take a few milliseconds to process the entire scene, and this may throw off certain calculations if you do not use a synced value.


game.time.master.rate

This property tells you how many ideal frames have passed since the last frame. It is one of the most useful properties in the time toolbox, and we’ll be spending some time explaining its nuances later in this article.

rate is determined by the delta divided by the idealDelta. For example, if you have a framerate of 60, your ideal delta is 16.7ms. If your last frame took 16.7ms (the actual delta), then the rate is 1.0. But if your last frame took 33.4ms, then your rate is 2.0. This tells you that you should have had 2 whole frames in that time. This is very useful information for creating smooth animations.
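The arithmetic is simple enough to sketch directly (values are illustrative):

```javascript
// rate = delta / idealDelta.
var idealDelta = 1000 / 60;    // ≈ 16.7 ms per frame at 60 fps
var delta = 2 * idealDelta;    // the last frame took twice as long
var rate = delta / idealDelta; // 2.0: two ideal frames have passed
```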

Note that rate makes no guarantees about the current frame. It cannot tell how long it will take to render. It can only say how long the last frame was. If you are using rate to govern speed smoothing, and you have a very uneven frame rate, any individual frame may be slightly off from where it should be. However, this is better than the alternative, where any deviation in frame rate will permanently alter the offset. After 1000 frames of half-frame-rate performance, a non-compensated solution will be 500 frames off; but a solution that uses rate to compensate will only ever be 1 frame off.


game.time.master.idealDelta

This property is calculated from game.frameRate. It is precisely equal to the number of milliseconds that should be taken per frame. With the default frame rate of 60 frames per second, this is 16.7ms. Ideal delta is used to compute other parameters.

And Now, Physics

You probably don’t consider yourself a physicist. If you do, what are you doing here? We have a dark energy cosmological paradigm to analyse! Shoo, shoo! Back to your laboratories!

However, you know a thing or two about physics. You can’t help it. You know that, when a mushroom man jumps up in the air, he’ll come down again. And you know that he’ll sort of do a curving motion as he goes.

We’re not interested in drawing paths or elegant graphs. All we care about is movement between frames. Once a frame is gone, it’s forgotten forever. This makes it very easy to think about physics.

Some Terms

We need four terms to make elegant motion: time, position, velocity and acceleration.

t = Time

Time is key to everything. It’s a one-dimensional arrow pointing ever forward.

p = Position

Position is simply where you are at this moment. It has no interest in time. Position will not change unless there is velocity.

Position, like velocity and acceleration, can be expressed as a vector of multiple dimensions. These are usually x and y, and sometimes z or other hyperdimensions. Fortunately, we can consider each dimension separately, so we will consider only one.

v = Velocity

Velocity is the change in position over time. In other words, when time passes, position changes. In coding terms, p += v * t. Velocity will not change unless there is acceleration. If velocity is negative, this simply means you are moving the other way. (This is in contrast with speed, which is always positive. A race car is not going at -100 if it turns around halfway down the track.)

a = Acceleration

Acceleration is the change in velocity over time. In other words, when time passes, velocity changes. In coding terms, v += a * t. Acceleration represents forces in the world: the thrust of an engine, friction of a surface, or gravity pulling you down.

Let’s Dash

Here’s a mushroom man. He’s off to rescue the President. But he’s on a time limit – he can’t dawdle! So he runs along.

This code will look familiar to anyone who’s completed basic Kiwi.js tutorials:
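Something like this (mushroomMan is a hypothetical sprite; only the properties used here are modeled, and update stands in for your State’s per-frame update method):

```javascript
// A sprite moving at constant speed: p += v, with time assumed
// to be exactly one frame per update.
var mushroomMan = { x: 0, speed: 3 };

function update() {
    mushroomMan.x += mushroomMan.speed;
}
```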

This seems pretty straightforward.

Let’s Hop

Imagine that our mushroom man decides to jump into the air. Here’s some code to govern his trajectory (please don’t expect this to look good on screen, or function at all in Kiwi.js):
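A sketch of such a trajectory (the gravity and jump values are invented for illustration; y grows downward, as is usual for screen coordinates):

```javascript
var mushroomMan = { y: 0, velocityY: 0 };
var GRAVITY = 0.5;        // acceleration per frame, pulling down
var JUMP_VELOCITY = -10;  // negative is up in screen coordinates

function jump() {
    mushroomMan.velocityY = JUMP_VELOCITY;
}

function update() {
    // v += a * t, then p += v * t, with t assumed to be 1 frame
    mushroomMan.velocityY += GRAVITY;
    mushroomMan.y += mushroomMan.velocityY;
    if ( mushroomMan.y > 0 ) {
        // Landed: clamp to the ground
        mushroomMan.y = 0;
        mushroomMan.velocityY = 0;
    }
}
```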

This code will send our mushroom man up into the air at his base velocity. But this velocity will gradually change, slowing down, until he hangs at the peak of his jump. Then he will accelerate downwards until he reaches the ground again.

So far, so good.

Temporal Distortion Strikes

The mushroom man has 5 minutes to reach the President. At a normal pace, he’ll get there in 3 minutes. All seems well.

But what happens when the computer running our mushroom man slows down? Say it’s running at half speed. Now the mushroom man will get there in 6 minutes! That’s too slow! (And his animation will be too fast, because it works on time, not frames. He’ll be running in place!) What can we do?

Well, we could change our time limit based on the frame rate, but that would be complicated and it wouldn’t make the game run any faster.

It’s much better to smooth out the mushroom man’s movement. And to do this, we’re going to use the rate property.

Smooth Dash

This is a nice, simple fix. Remember that p += v * t. All we need to do is multiply velocity by time. We had previously assumed that time was always 1 (a single regular frame); but because rate tells us exactly how many frames should have passed since the last frame, we can use that instead.
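The dash update then becomes something like this (the rate value is faked here so the snippet is self-contained; in a real game you would read the rate property discussed earlier):

```javascript
// rate is faked for illustration: a value of 2.0 simulates a game
// running at half its intended frame rate.
var game = { time: { rate: 2.0 } };
var mushroomMan = { x: 0, speed: 3 };

function update() {
    // p += v * t, where t is measured in ideal frames
    mushroomMan.x += mushroomMan.speed * game.time.rate;
}
```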

Now, if the game is running at half frame rate, it doesn’t matter. The rate is 2, so the mushroom man runs twice as far per frame.

Note that we are not altering the actual value of mushroomMan.speed. This would quickly spiral out of our control: from 3 to 6 to 12 to 24. The mushroom man would hurtle uncontrollably into the distance. Soon he would reach the stars.

Smooth Hop

This one requires a little more thought. Velocity is not constant in a jump. It changes with time. However, this is no more complex than it has to be. When something changes with time, simply multiply by rate and it will be corrected. The code becomes as follows:
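As before, this is a sketch with an invented gravity value and a faked rate:

```javascript
var game = { time: { rate: 2.0 } }; // half frame rate, faked
var mushroomMan = { y: 0, velocityY: -10 };
var GRAVITY = 0.5;

function update() {
    var rate = game.time.rate;
    // Both v += a * t and p += v * t are scaled by the same rate
    mushroomMan.velocityY += GRAVITY * rate;
    mushroomMan.y += mushroomMan.velocityY * rate;
}
```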

Now, when the mushroom man jumps in a low frame rate computer, he will move up further every frame, but his velocity will fall off faster every frame as well. Taken together, his trajectory is practically identical to that of a jump on a faster computer.

Rate-governed trajectories

As you can see, the trajectory has some artifacts because of its disjointed state, but it’s better than slowing down, right? This diagram exaggerates the artifacts; in reality, only extremely fast or forceful interactions would be distorted this much.

Going Slower? Go Faster!

This is the fundamental purpose of rate: to give you a speed governor, allowing you to move further when you slow down. We recommend you consider using rate as part of your standard workflow.

We’ve already found rate to be a useful asset in animating smooth action. We hope you do too!

In Review

When time slows down, you now have the tools to fix it. Just remember: if it changes with time, multiply by rate.

Benjamin D. Richards

New Blend Modes and Renderer Tools (Kiwi.js v1.1.0)

Hello, and welcome to the fifth in a series of in-depth articles about Kiwi.js v1.1.0, now available for use. Today we’re talking about blend modes and how they unlock new possibilities for drawing fantastic effects. I’m Ben Richards, and today I’ll show you some new ways of handling Renderers in Kiwi.js.

This post refers only to the WebGL rendering system. Canvas rendering does not support this functionality at this time.

Who Needs to Read This Post?

Advanced users. If you want to write your own shaders, this contains important information. If you’re interested in particle effects or other special effects, this is also important.

WebGL Blending

When you’re rendering with WebGL, you usually render several things. For example, a city street, a ghost detective, and the rain. All these things overlap. But how do they overlap? How does the graphics card choose how much color to let through that ghost detective? He is a ghost, after all.

Blend Functions

The answer lies in the blendfunc or blend function. This is a mini-pipeline that is evaluated every time a pixel is drawn to the screen. (That is a simplification, but it’s essentially what’s happening.) The blendfunc compares two pieces of data:

  • src or source pixel, the pixel that is trying to be drawn.
  • dst or destination pixel, the pixel that was already there.

A conventional blendfunc looks like this: gl.blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA ).

This tells the video card how to blend the src onto the dst. The actual equation is a bit more complicated.

Here we must understand three things: channels, factors, and equations.


Channels

A channel is a single value that describes part of a color. The video card considers a color to be made of 4 channels: RGBA, or red, green, blue and alpha.

The video card thinks of colors as values between 0 and 1. This is mathematically convenient. Other programs may represent them as 0-255, or 0-100, but these are all abstractions for a pattern of bits.

The alpha channel is particularly important. It doesn’t mean anything intrinsic and it doesn’t ever reach the screen. However, you almost always use it to represent transparency.


Factors

The factors are specified in the blendfunc declaration. These are numbers that multiply the contents of a channel. Many of them are derived from the contents of src or dst channels. The permissible values are:

  • ZERO
  • ONE
  • SRC_COLOR
  • ONE_MINUS_SRC_COLOR
  • DST_COLOR
  • ONE_MINUS_DST_COLOR
  • SRC_ALPHA
  • ONE_MINUS_SRC_ALPHA
  • DST_ALPHA
  • ONE_MINUS_DST_ALPHA
  • CONSTANT_COLOR
  • ONE_MINUS_CONSTANT_COLOR
  • CONSTANT_ALPHA
  • ONE_MINUS_CONSTANT_ALPHA
  • SRC_ALPHA_SATURATE

Most of these take their value directly from the colors being blended. For example, ONE_MINUS_SRC_ALPHA will return 0.7 if the src alpha channel has the value 0.3.

SRC_ALPHA_SATURATE has a special value: it is the minimum of either src alpha or (1 – dst alpha).
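As a quick sketch of that rule:

```javascript
// SRC_ALPHA_SATURATE resolves to the minimum of src alpha and
// (1 - dst alpha).
function srcAlphaSaturate( srcAlpha, dstAlpha ) {
    return Math.min( srcAlpha, 1 - dstAlpha );
}
```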

The CONSTANT types use special additional parameters. You set these using gl.blendColor(1.0, 1.0, 1.0, 1.0) or similar values. These just define constant channels that are drawn from neither src nor dst.


Equations

The equation takes the channels and the factors and blends them together. There are three possible equations, depending on the mode set by blendEquation:

  • FUNC_ADD
  • FUNC_SUBTRACT
  • FUNC_REVERSE_SUBTRACT

These resolve as follows:

  • FUNC_ADD: ( src * srcFactor ) + ( dst * dstFactor )
  • FUNC_SUBTRACT: ( src * srcFactor ) - ( dst * dstFactor )
  • FUNC_REVERSE_SUBTRACT: ( dst * dstFactor ) - ( src * srcFactor )

So let’s take the above example of gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), and see how this affects a color. Take the src RGBA value (1.0, 0.5, 0.0, 0.5), a kind of transparent orange, and the dst RGBA value (0.5, 0.5, 0.5, 0.2), a kind of faint gray. How do these numbers blend using FUNC_ADD?

Each channel goes into the blend equation separately. Very simple operations are performed and combined together. In this case, SRC_ALPHA is 0.5, and ONE_MINUS_SRC_ALPHA is therefore also 0.5. The channels resolve as follows:

  • R: ( 1.0 * 0.5 ) + ( 0.5 * 0.5 ) = 0.75
  • G: ( 0.5 * 0.5 ) + ( 0.5 * 0.5 ) = 0.5
  • B: ( 0.0 * 0.5 ) + ( 0.5 * 0.5 ) = 0.25
  • A: ( 0.5 * 0.5 ) + ( 0.2 * 0.5 ) = 0.35

The result is RGBA (0.75, 0.5, 0.25, 0.35).
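The same arithmetic can be sketched in plain JavaScript, standing in for what the video card does per pixel:

```javascript
// Simulates gl.blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA )
// with FUNC_ADD: out = src * srcFactor + dst * dstFactor, per channel.
function blendNormal( src, dst ) {
    var srcFactor = src[3];     // SRC_ALPHA
    var dstFactor = 1 - src[3]; // ONE_MINUS_SRC_ALPHA
    return src.map(function (s, i) {
        return s * srcFactor + dst[i] * dstFactor;
    });
}

var src = [1.0, 0.5, 0.0, 0.5]; // transparent orange
var dst = [0.5, 0.5, 0.5, 0.2]; // faint gray
var out = blendNormal( src, dst ); // ≈ [0.75, 0.5, 0.25, 0.35]
```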

Problems with Blend Functions

You may be able to see an issue with the numbers above. Look at just the alpha channel, used to represent transparency. We took a background of 0.2, applied a foreground of 0.5, and the result was… 0.35. How can it be more transparent than the foreground we added?

The answer is simple: we’ve performed the wrong operation on the alpha channel.

But the alpha channel doesn’t render to screen, right? We only see RGB data, and the A is meaningless. Well, no. The alpha channel is still used to composite the data onto the screen. In the case of Kiwi.js, this actually caused a fairly severe bug at one point, where transparent objects would render the web page behind the game. This tended to look like unnatural whiteness, the color of a default page. A bad calculation on the alpha channel can cause major issues with the final output.

Diagram showing blend error in action

Separate Blend Functions

Fortunately, the OpenGL standard provides a solution: blendFuncSeparate and blendEquationSeparate. These allow you to specify different factors and equations for the alpha channel.

Using these, we can set the graphics card to blend as follows: gl.blendFuncSeparate( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE ) and gl.blendEquationSeparate( gl.FUNC_ADD, gl.FUNC_ADD ).

The second half of each parameter list specifies the alpha behaviour.

If we run this blendfunc over the blend operation above, the RGB channels will resolve as normal (0.75, 0.5, 0.25). The alpha channel will resolve as follows: ( 0.5 * 1 ) + ( 0.2 * 1 ) = 0.7.

Now, this is much more logical. The opacity of the two colors is adding together. The separate blendfunc allows us to deliver blends that make perfect sense.

Blendfuncs in Kiwi.js

All blendfunc behaviour is managed behind the scenes in Kiwi.js. You can request special blendfuncs when necessary, and the rest of the time it remains smooth and fast.

To understand blendfunc use, you must first understand the concept of Kiwi.js renderers.


A renderer is not the same thing as a rendering engine. This can sometimes be confusing. The engine part is the whole pipeline, and we don’t care about that right now. The renderer itself is a vital part of that pipeline.

A renderer is a chunk of code that handles rendering a certain type of object. It manages shaders and blend modes. Renderers are tracked by the GLRenderManager, the workhorse that takes care of the rendering pipeline.

Renderers are also tracked by game objects. In WebGL rendering mode, each game object has a glRenderer property which points to a renderer.

Instanced Renderers

You can request an instance of a renderer from the manager by calling game.renderer.requestRendererInstance( rendererID ).

Where rendererID is the name of a renderer in the Kiwi.Renderers namespace.

An instanced renderer is a new copy of that renderer, unique to an object. We don’t actually use instances very often, because there’s a more efficient way to do most tasks.

Instanced renderers are useful for objects which require per-object shader information. For example, the Particle Effect Plugin for Kiwi.js encodes a lot of particles into the shaders on a single object, so each such object must have its own renderer instance.

Shared Renderers

Most renderers are shared. This means they don’t need to set per-object information, and can be attached to multiple game objects. This is very useful for accelerating batch rendering. The default renderer is Kiwi.Renderers.TextureAtlasRenderer, which you will probably use for 90% of game objects. When you create a normal game object, the engine requests a shared renderer. In the process it either creates the requested renderer, or passes a reference to it if it already exists. In this way, many objects may point at the same renderer.

You can get a shared renderer (agnostic of whether it must be created or simply referenced) by calling game.renderer.requestSharedRenderer( rendererID ).

For example, if you wanted to access the default renderer, you would call requestSharedRenderer( "TextureAtlasRenderer" ).

BlendMode Objects

Starting in Kiwi.js v1.1.0, the GLRenderManager also tracks blend modes. It does so using a new GLBlendMode object which is attached to every renderer as glRenderer.blendMode.

The render manager uses this object to efficiently manage blend mode switching. We like to minimise direct and costly calls to the video card, so the manager only switches when the mode has actually changed between render batches.

The GLBlendMode object tracks six key pieces of data, with default values denoted:

  • srcRGB (SRC_ALPHA)
  • dstRGB (ONE_MINUS_SRC_ALPHA)
  • srcAlpha (ONE)
  • dstAlpha (ONE)
  • modeRGB (FUNC_ADD)
  • modeAlpha (FUNC_ADD)

As you can see, this simply tracks the parameters necessary for setting separate blend equations and functions.

Configuring GLBlendMode

You can set up a GLBlendMode object either on creation, or after creation. Both methods use a config object. You can also call one of several preset configurations.

Config Objects

A config object is a simple Javascript object, defining one or more parameters.

If you specify a predefined mode, Kiwi.js will skip any other parameters and set that mode. For example, { mode: "ADDITIVE" } selects the additive preset.

If you specify the six core pieces of data, Kiwi.js will set them. You can use either a string to describe the title, or access the property directly from the GL drawing context. You can even mix and match. While we do not recommend it, the parser can even correct uncapitalised strings and accidental use of international spelling for “colour” (but not on the names of constant properties):
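Here is a sketch of such a config object. The sloppy strings are deliberate; they are the kind the parser corrects:

```javascript
// A config object defining all six parameters. The lowercase and
// "colour" spellings are intentional examples of input the parser
// will correct.
var blendConfig = {
    srcRGB: "src_alpha",                 // uncapitalised: corrected
    dstRGB: "one_minus_constant_colour", // "colour": corrected
    srcAlpha: "ONE",
    dstAlpha: "ONE",
    modeRGB: "FUNC_ADD",
    modeAlpha: "func_add"                // mix and match freely
};
```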

If you specify an incomplete set of data, Kiwi.js will change only those parameters that you specify. It will leave the others as they are.

For a complete list of possible values, see above.

Preset Modes

We have included some useful modes that you can access quickly. To access these, use either the config object { mode: "MODE_NAME" }, or the method blendMode.setMode( "MODE_NAME" ).

Unfortunately, the blend modes you may know from programs like Photoshop are mostly not reproducible with blend modes on the video card. These serve a different function, and are designed to work with shaders, which can extend their capabilities significantly.

Viable modes are:

  • “NORMAL”
  • “ADDITIVE” or “ADD”
  • “SUBTRACTIVE” or “SUBTRACT”
  • “ERASER” or “ERASE”
  • “BLACKEN” or “BLACK”

Diagram showing blend modes in action

“NORMAL” mode draws things as you’d expect.

“ADDITIVE” mode adds colors together. This is very useful for fiery effects, glows, and other objects that seem to emit light and eventually blow out to pure white. We strongly advise that you consider the exact pipeline of pixels when using this mode. For example, if you input pure red (RGB 1,0,0), it will never turn white, because it has no G or B values. This is what some effects technicians call “programmer colors”, because they’re simple to define but have bad behaviours on an artistic level. We recommend that you use colors with some value in all RGB channels, or in other terms, a saturation below 100%. This may result in slightly muddier colors, but they will blend in much nicer fashion.

The Particle Effect Plugin has an option to use “ADDITIVE” blending. Just one look at the example screenshot, and you can see how great it looks in action.

“SUBTRACTIVE” mode subtracts the src from the dst. This creates an eerie photo-negative effect. A similar warning to that of additive blending must be given: fully-saturated primary colors will not blow out into total darkness.

“ERASER” mode draws a hole in the game. Instead of seeing the object you are drawing, you will see the web page behind the game, as though you have clipped out part of the game canvas itself. This is a useful effect if you are integrating your game into a web layout, and want to create irregular edges. You can also use a transparent stage background, but “ERASER” mode gives you more control over layering. For example, you can eliminate sprites that move outside a certain shape on the screen.

“BLACKEN” mode draws textures as though they were black. Pretty simple. This could be useful for a highly-stylised game, or for creating shadows from color sprites.

There are 15 blend factor options for each of src and dst, and 3 possible blend equations. This makes 675 possible combinations. These five are just those that we found to be useful without further support from shaders. There are numerous other valid combinations.

For example, we found that { srcRGB: “DST_COLOR”, dstRGB: “ONE_MINUS_SRC_ALPHA”, srcAlpha: “ONE”, dstAlpha: “ONE”, modeRGB: “FUNC_ADD”, modeAlpha: “FUNC_ADD” } could create a Multiply effect, darkening those objects beneath it as in the Photoshop blend mode – but only with perfectly opaque or transparent objects. Partially translucent pixels did not blend as expected. This is the only reason there is no Multiply mode in Kiwi.js.

When designing your own shaders, you should always consider the use of blend modes. They may make your life considerably easier.

Contagion and Solution

Note that most sprites share the default renderer. If you try experimenting with these blend modes, odds are good that everything will vanish. This is because altering a shared renderer alters every object that uses it.

In Kiwi.js v1.1.0, we’ve provided a new tool to make it easier to play with renderers. It is now possible to clone a shared renderer, creating a new copy which you may then request as a standard shared renderer.

Assign this clone to your experimental objects, and then tweak the new blend mode on the clone.

In Review

Blend modes are a useful tool for creating common visual effects. They are also useful as part of a more complex shader system. Today we’ve discussed the blend mode tools available in Kiwi.js, how they apply to Renderer objects, and how they create the colors you see on your screen. These can be tremendously powerful tools in your arsenal.

Benjamin D. Richards

New Tagging System and Object Selection, Part II (Kiwi.js v1.1.0)

This article builds on topics introduced in Part I.

Collecting Objects

Now that you know about names, uuids, and tags, you can use them to search out and collect objects from your scene. There are several methods for doing this, depending on what exactly you’re searching for.

The Group object (and by extension the State object as well) has the following relevant methods:

  • getAllChildren()
  • getChildAt( index )
  • getChildByName( name )
  • getChildByID( uuid )
  • getChildIndex( child )
  • getRandom()
  • getChildrenByTag( tag )


getAllChildren()

This method returns a list of all children at all levels. For example, if you put Object1 into GroupA, and then put GroupA into GroupB, GroupB.getAllChildren() would report [GroupA, Object1].

This method has an optional flag, getGroups, which defaults to true. If you set it to false (getAllChildren(false)), the list will omit all group objects, but will still include their contents.

Calling getAllChildren() on the state itself is a good way to get a list of all objects in a scene.

Note that it will also gather invisible objects. Note also that this may take some time on large scenes, so try to avoid using this method frequently.

getChildAt( index )

This method returns the child at the position defined by the parameter index. This is similar to calling group.members[ index ], but it automatically detects an invalid index and returns null. You can check for null results before processing potentially invalid results.

getChildByName( name )

This method returns the first child defined by the parameter name. If there is no child by that name, it returns null. If there are multiple children by that name, it returns the first one.

By default this searches just the immediate members of the group, not its children. If you supply an additional parameter true, the method will search the entire sub-tree.

By default, objects in Kiwi.js will share names if they use the same texture. This method is most useful if you manually name your objects with unique titles.

getChildByID( uuid )

This method returns the first child defined by the parameter uuid. It works just like getChildByName, with the added certainty that uuids are unique. Add the optional parameter true to search the entire sub-tree.

This method is more useful than getChildByName because of its strong guarantee of uniqueness.

getChildIndex( child )

This method returns the index of a child within a group. This can be useful if you are recording the index of an object as you add it to a group, and intend to use the index using getChildAt later on.


getRandom()

This method gets a random member of the group. It does not select further down the tree, so only the immediate members of the group are selected.

You can specify an optional start parameter. If so, the method selects above that number.

If you specified a start, you can also specify a length. If so, the method selects within that range. For example, group.getRandom(4, 8) will select between indices 4 and 12. This is clamped to a valid range, so you will always get valid members.

This method is useful for introducing unpredictable behaviour. For example, if you want alien warriors to attack the player one by one for reasons of honour, you can select a random challenger.

getChildrenByTag( tag )

This method searches for objects which match the string tag, and returns an array containing all tagged objects.

This is a very useful method. Because objects can have multiple tags, you can perform all sorts of selections in this fashion.

Adding and Rearranging Objects

Once you’ve got your objects, you can perform any kind of operation you want. One possible operation is reordering their position in a group. This can be useful for many reasons. For example, if you’re making a card game, you may wish to move a card into another player’s hand-group. We’ll examine a more interesting system later.

Groups can perform the following operations on their members:

  • addChild( child )
  • addChildAt( child, index )
  • addChildBefore( child, beforeChild )
  • addChildAfter( child, afterChild )
  • removeChild( child )
  • removeChildAt( index )
  • removeChildren()
  • setChildIndex( child, index )
  • swapChildren( child1, child2 )
  • swapChildrenAt( index1, index2 )
  • replaceChild( oldChild, newChild )

In addition, there are a number of advanced methods. We will not examine them in this article, but they are available for coders who are comfortable with contexts and callbacks.

  • forEach( context, callback, params)
  • forEachAlive( context, callback, params)
  • setAll( componentName, property, value )
  • callAll( componentName, functionName, args )

addChild( child )

This is the standard method for adding children to a group. It adds the child to the very end of the members list.

If the child was previously a member of another group, it will be removed from that group before being added to this one. Note that it will not take account of world space transformations, so if the groups are at different transformations, the object will seem to jump.

addChildAt( child, index )

This method adds the child into the members list at the desired position. It will shuffle up later members to make room.

This is a useful method for explicitly setting the order of an object.

addChildBefore and addChildAfter( child, target )

These methods simply add the child into the members list relative to a target child.

This might be useful for adding equipment to a character, displaying an aura around a game element, or other operations that should “be part of” an extant object.

removeChild( child )

This method removes a child from its parent. Unless you subsequently add it to another parent, it will no longer render.

You may optionally specify a second parameter true. This will tell the function to destroy the child completely. This is good practice if you do not intend to reuse the object. If you do not destroy it, the object will still be stored by the State. This can increase memory consumption and may cause your game to become unstable. We recommend that you destroy objects that are no longer needed.

removeChildAt( index )

This performs the removeChild() function on the child at a specific index. This might be useful if you’re running a loop over a group.


removeChildren()

This method removes several children from a group. If you do not specify any parameters, it will remove all children.

You may specify a begin and an end parameter. This allows you to remove only a block of children.

If you have specified begin and end, you may also specify destroy to be true or false. It is false by default. If you set it to true, it will destroy the children it removes.

This is a useful function for managing large numbers of objects all at once.

setChildIndex( child, index )

This method changes the index of a child. It must already be a member of the group. This is useful for reordering a group.

swapChildren( child1, child2 )

This method swaps two children within a group. This will be useful if you do not frequently change order in your group.

swapChildrenAt( index1, index2 )

This method also swaps two children, but identified by index.

replaceChild( oldChild, newChild )

This method replaces a child. It does not destroy the old child. This is useful if you need to simply transform an object in your scene, turning it into something entirely different.

Putting it Together: Depth Sort A Crowd

Here’s a common game scenario. Imagine you have a top-down map with characters walking around. Sometimes the characters walk in front of one another so that their sprites overlap. How can you tell which one is in front? After all, it would look pretty silly if you drew the one in back over the top of the one in front!

By far the most practical solution is to reorder the scene graph so that the uppermost objects are drawn first, giving the illusion that they are slightly further from the camera. This depth sorting is a useful trick.

But to depth sort all the characters, you have to mix them all together in a single group. Imagine that you have alien warriors, static obstacles, human colonists, and the player all running around. Each has different rules. It would be great to sort them all into different groups; this would yield better performance using WebGL batch rendering, and allow you to control them more easily. But for the sake of this example, we’ll assume that depth sorting is more important.

The reader may wish to consider more complex solutions that mix the advantages of batch rendering and depth sorting.

Initial Population

Let’s consider a very simple scene hierarchy. Rather than split up the characters into their own groups, we’ve added them all to a single big group. Finally, we’ve noted their Y coordinates, where Y = 0 is at the upper edge and higher Y values refer to coordinates further down the map. Characters with a higher Y value should be drawn over characters with a lower Y value.

  • State
  • Background
  • CharacterGroup
    • AlienWarrior (Y = 200)
    • Obstacle (Y = 110)
    • Player (Y = 100)
    • Colonist (Y = 50)

This shall be sufficient to demonstrate a depth sorting system.

A Simple Sort

For our first attempt, we’re going to use a bubble sort algorithm. It uses the following rules:

  1. Set the traversal limit to (list length – 1).
  2. Step through each adjacent pair of elements in the unordered list (0-1, 1-2, 2-3 and so forth), swapping the two values whenever the second is smaller.
  3. When you reach the traversal limit, reduce the limit by 1.
  4. If the traversal limit is less than 1, finish.
  5. Otherwise, return to step 2.

This has the effect of “bubbling” the highest value up to the end of the list. As you run it over and over, successively highest values bubble towards the end, and the unsorted portion of the list grows shorter.

Here’s how you do it:
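Below is a minimal sketch of such a sort. A tiny stand-in group (with members, getChildAt and swapChildrenAt, matching the Kiwi.js method names) is included so the example is self-contained; in a real game you would call these methods on your CharacterGroup.

```javascript
// Stand-in group exposing the Kiwi.js-style methods we need.
var characterGroup = {
    members: [
        { name: "AlienWarrior", y: 200 },
        { name: "Obstacle", y: 110 },
        { name: "Player", y: 100 },
        { name: "Colonist", y: 50 }
    ],
    getChildAt: function ( index ) {
        return this.members[ index ];
    },
    swapChildrenAt: function ( index1, index2 ) {
        var temp = this.members[ index1 ];
        this.members[ index1 ] = this.members[ index2 ];
        this.members[ index2 ] = temp;
    }
};

// Bubble sort the group by Y coordinate, lowest first,
// so objects further down the map are drawn last (on top).
function depthSort( group ) {
    var limit = group.members.length - 1;
    while ( limit >= 1 ) {
        for ( var i = 0; i < limit; i++ ) {
            if ( group.getChildAt( i + 1 ).y < group.getChildAt( i ).y ) {
                group.swapChildrenAt( i, i + 1 );
            }
        }
        limit--;
    }
}

depthSort( characterGroup );
// Members are now ordered Colonist, Player, Obstacle, AlienWarrior.
```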

Note use of the key functions getChildAt and swapChildrenAt.

Bubble Sort In Action

Here’s how the order changes over the course of a run, pass by pass:
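A few lines of plain JavaScript reproduce such a run. This sketch bubble sorts the Y values from our scene (200, 110, 100, 50) and logs the list after each pass:

```javascript
// Bubble sort over the Y values, logging the list after each pass.
var ys = [ 200, 110, 100, 50 ];
var limit = ys.length - 1;
while ( limit >= 1 ) {
    for ( var i = 0; i < limit; i++ ) {
        if ( ys[ i + 1 ] < ys[ i ] ) {
            var temp = ys[ i ];
            ys[ i ] = ys[ i + 1 ];
            ys[ i + 1 ] = temp;
        }
    }
    console.log( ys.join( ", " ) );
    limit--;
}
// Logs: 110, 100, 50, 200
//       100, 50, 110, 200
//       50, 100, 110, 200
```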

Even though the order was completely backwards, you can see that the sort has precisely ordered it in the Y axis.

Problems with Bubble Sort

Although it’s not too bad on this small data set, bubble sort is actually quite slow. We used it as an example because it’s easy to understand. In real-world applications, however, it would quickly slow down as you added more elements to the scene.

It’s generally accepted that insertion sort is faster on small data sets, and quick sort or merge sort is faster on large data sets. Sometimes you can use several sort algorithms at the same time. It would take a long time to explain the implementation of these algorithms, so I suggest you look them up on your own time. They’re pretty cool.

In Review

Today you’ve learned how to manipulate objects in the scene graph. You’ve also seen how a bubble sort can make sense of a chaotic scene, and seen some reference to advanced methods (the methods we use in-house). This will be very useful if you are building any kind of complex scene.

Benjamin D. Richards

New Tagging System and Object Selection, Part I (Kiwi.js v1.1.0)

Hello, and welcome to the fourth in a series of in-depth articles about Kiwi.js v1.1.0, now available for use at Today we’ll be talking about object selection, and how the new tagging system can help you manage complex scenes. I’m Ben Richards, and we’ll be exploring object selection together. Unlike other articles in this series, I wasn’t part of the development team for these features. But I do sit next to the developer who built them, who is also named Ben (Ben Harding).

Who Needs to Read This Post?

Most users. If you’re managing a scene with lots of content, you’ll find these methods useful for quickly accessing specific objects. You’ll also learn something about the scene graph.

Scene Graph

Kiwi.js uses a scene graph. This is a powerful tool for organising your scenes.

Scene Order: First In First Out

When you add game objects to a State, you do so in the order they are drawn. For example, consider the following scene:

  1. Landscape (StaticImage)
  2. Wreckage (StaticImage)
  3. Alien Warrior (Sprite)
  4. Player (Sprite)
  5. Rain (ParticleEffect)

This will draw the landscape first, then draw obstacles and characters on top of that, and finally draw rain on top of everything. This makes sense. You wouldn’t want to draw the landscape over the characters.

Groups: Building A Tree

To organise your scenes, you can add objects to Groups. A Group simply holds a series of objects. When it comes to render order, groups are replaced by their contents, preserving their order.

Groups are also objects, and can be added to other groups. This can quickly lead to very complex scenes. For example, if you add the following objects to a game:

  1. Background
  2. Group Alexander
  3. Group Byron

The groups will not render, but they will be considered in the order they are added.

Now add objects to these groups:

  1. Add Alien Warrior X to Group Byron.
  2. Add Group Cicero to Group Alexander.
  3. Add Alien Warrior Y to Group Alexander.
  4. Add Alien Warrior Z to Group Cicero.

What order will they render in?

The answer is ZYX, even though we added them in order XYZ. How can this be? Let’s look at the new scene graph, indicating the numeric order in which objects will render.

  • (1) Background
  • Group Alexander
    • Group Cicero
      • (2) Alien Warrior Z
    • (3) Alien Warrior Y
  • Group Byron
    • (4) Alien Warrior X

As you can see, it just goes top to bottom.
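This depth-first traversal is easy to sketch with plain objects standing in for Kiwi groups (the structure and names below are illustrative):

```javascript
// Depth-first traversal: a group contributes its children, in order,
// before the next sibling is considered.
function renderOrder( node, out ) {
    out = out || [];
    if ( node.children ) {
        for ( var i = 0; i < node.children.length; i++ ) {
            renderOrder( node.children[ i ], out );
        }
    } else {
        out.push( node.name );
    }
    return out;
}

var state = { children: [
    { name: "Background" },
    { children: [                          // Group Alexander
        { children: [ { name: "Z" } ] },   // Group Cicero
        { name: "Y" }
    ] },
    { children: [ { name: "X" } ] }        // Group Byron
] };

console.log( renderOrder( state ).join( ", " ) );
// Logs: Background, Z, Y, X
```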

Use this functionality to preserve order in your scenes.

Note that groups are not rendered objects. They are simply a structure that tells the game to render their children in order. While you can set their visibility and transform them, you cannot change the alpha of a group. You must change the alpha of its children instead.

States Are Groups

Under the hood, a State is just a fancy Group with some extra abilities. When you add children to the State, you’re really just populating a root group. It would be more accurate to represent the scene thus:

  • State
    • (1) Background
    • Group Alexander
      • Group Cicero
        • (2) Alien Warrior Z
      • (3) Alien Warrior Y
    • Group Byron
      • (4) Alien Warrior X

Tools to Preserve Order

We’re already seeing some complicated structures in these examples. But imagine how complex it would become to manage a scene with thirty aliens. How do you access them all – or acquire sub-groups?

Members and Tracking Lists

States and groups maintain membership lists.

Groups have a list called members. This is simply an array of all children on the object. It is not exactly the same thing as all children beneath the object; if a child is a group with its own children, members will only record the group. You can access the members of a group through standard array access methods, e.g. group.members[ 0 ] or group.members.length.

States are a kind of group, so they already have a members array. You may access this in the same way.

States also have a secret _trackingList array. This is not intended for user access, so try not to touch it or something might break. Internally, it tracks every object that was created while the state was active, which is very close to the set of members of the state and all its sub-members.

You’ll note that this does not make it very easy to access components of a complex scene graph. For example, if you want to check every alien against your player’s position, but the aliens are stored in several different groups, how might you go about this?

Note: It’s confusing to swap between “members” and “children” all the time. We’re planning to deprecate “members” in favour of “children” in the near future.

Identifying Objects

There are three main ways to identify objects: names, IDs, and tags.


Names

Objects are named when you create them. You may name a Group on creation by passing it a string after specifying its state, e.g. new Kiwi.Group(state, "groupName"). Groups without a specific name are given the empty string "". Other game objects are named after the name you gave their texture atlas. For example, if you create a StaticImage using a texture you loaded under the name "background", the value of its name property is "background", because this is what you named the texture.

Names are not at all guaranteed to be unique. For example, any object you create using the “background” texture will also be named “background”. In general, assume that names are unreliable identifiers.


IDs

Objects are assigned a UUID upon creation. The UUID is a Universally Unique Identifier. It’s a long string that looks a bit like "2816f873-c232-4746-b81e-d89aa3332c32".

Unlike names, UUIDs are practically unique. Think of them as random numbers. It is theoretically possible for two UUIDs to be generated with the same value, but the odds are significantly lower than you being struck by a meteorite. For all intents and purposes, a UUID is effectively unique.


Tags

Groups and renderable objects can be tagged using new methods in Kiwi.js v1.1.0. A tag is a simple string, and an object can have any number of tags.

You can add a tag to an object using addTag as follows:

It’s pretty simple – you don’t need to create a list or anything, just supply one or more tags.

You can remove a tag from an object using removeTag as follows:

You can check for tags using hasTag as follows:
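In practice all three calls are one-liners. The sketch below uses a minimal stand-in object that mimics the Kiwi.js v1.1.0 tag API (addTag, removeTag, hasTag), so it is self-contained and runnable outside the engine:

```javascript
// Minimal stand-in mimicking the Kiwi.js v1.1.0 tag API,
// included here only so the example is self-contained.
var alien = {
    tags: [],
    addTag: function () {
        for ( var i = 0; i < arguments.length; i++ ) {
            if ( this.tags.indexOf( arguments[ i ] ) === -1 ) {
                this.tags.push( arguments[ i ] );
            }
        }
    },
    removeTag: function () {
        for ( var i = 0; i < arguments.length; i++ ) {
            var index = this.tags.indexOf( arguments[ i ] );
            if ( index !== -1 ) {
                this.tags.splice( index, 1 );
            }
        }
    },
    hasTag: function ( tag ) {
        return this.tags.indexOf( tag ) !== -1;
    }
};

alien.addTag( "enemy", "alien" );         // supply one or more tags
console.log( alien.hasTag( "enemy" ) );   // true
alien.removeTag( "enemy" );
console.log( alien.hasTag( "enemy" ) );   // false
```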

Tags are very useful for identifying and collecting broad categories of items without having to write your own arcane code to probe the scene. Tags don’t do anything you couldn’t do another way, but they are very handy for common tasks. For example, you might tag several types of object with “enemy”.

We’ll explain how to use this information in part II.

In Review

Today we’ve learned about the scene graph and object identifiers.

Next Time

In the next article, we’ll look at ways to use this information.

Benjamin D. Richards

New WebGL Texture Pipeline

Hello, and welcome to the third in a series of in-depth articles about Kiwi.js v1.1.0, now available for use at Today we’re talking about the texture pipeline, the way the engine runs images through the graphics card. I’m Ben Richards, and I’m very excited about the new possibilities we’ve opened up.

As the title implies, this post is all about WebGL features. The canvas rendering system is much simpler: textures are simply available at all times.

Who Needs to Read This Post?

Advanced users. If you’re planning on making your own shaders, this will make certain requirements more straightforward. If you’re generally interested in WebGL, you’ll find this interesting. If you’re just using pre-built shaders, this is somebody else’s problem.



Shaders

A shader is a piece of code that runs on the graphics card, producing pretty graphics on the screen. Shaders do everything in WebGL graphics. The simple, flat textures that you draw in standard Kiwi use shaders. Rich, three-dimensional objects with complex geometry and intricate lighting also use shaders.


Multiple Textures

Advanced WebGL shaders need to use several textures at once. For example, in a normal map lighting shader, information is combined from a color map and a normal map.

This is easy to implement within the fragment shader. One must simply declare a uniform (a kind of variable that is set before the object is drawn) inside the shader code, for example:
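For instance, a fragment shader for normal map lighting might declare two samplers (the uniform names here are illustrative, not the library’s actual names):

```glsl
// Two texture inputs for a normal-map lighting shader
// (uniform names are illustrative).
uniform sampler2D uColorSampler;
uniform sampler2D uNormalSampler;
```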

But how can we provide two textures to the shader at once? For that matter, how can we provide one texture?

Let’s find out.

Old Texture Pipeline

The old pipeline forms the foundation of the current texture system. It is broken into two parts: initiation and per-batch.


Initiation

This part fires once, during the creation of a State. Prior to this sequence, textures have been loaded from files and are stored in TextureAtlas objects (we shall assume this happens by the benevolent magic of the file system kobolds). These atlases are then stored in a TextureLibrary object. At this point, the GLRenderManager.initState() method calls _textureManager.uploadTextureLibrary(gl, state.textureLibrary), and the following steps are performed:

  • GLTextureManager steps through each property on TextureLibrary.textures.
  • This calls GLTextureManager.uploadTexture(gl, textureLibrary.textures[tex]). (At this point, the image itself enters the pipeline.)
  • uploadTexture creates a GLTextureWrapper(gl, textureAtlas), runs _addTextureToCache(param: the new wrapper), and sets textureAtlas.glTextureWrapper to the new wrapper. This wrapper communicates between the atlas and the graphics card.
  • It then runs _uploadTexture(gl, glTextureWrapper) to attempt to upload it into memory. (A memory error or overflow at this point will cause failure.)
  • This calls GLTextureWrapper.uploadTexture(gl). This will perform several key steps:
  • Create a gl texture, a reserved area in video memory.
  • Record the location of that gl texture.
  • Bind that location, informing the card that we are using that gl texture.
  • Set parameters on the gl texture.
  • Unbind the location, informing the card that we are no longer directly addressing that texture.
  • Finally, the texture wrapper sets _uploaded = true to signify that it’s in memory. The sequence is complete.


Per-Batch

This part fires once per render batch. (A render batch is a package of render operations that use the same texture and renderer.) The GLRenderManager performs the following pertinent steps:

  • renderBatch() calls _switchRenderer(), which calls Renderer.enable().
  • Renderer.enable() is on the Renderer, which is configurable via plugin Renderer objects. In the case of the default TextureAtlasRenderer, it sets a uniform sampler to 0. This is important.
  • renderBatch() then calls Entity.renderGL(). This calls Renderer.addToBatch() or .concatBatch(), and may include texture work (such as in TextField objects). This is configurable via plugin GameObjects.
  • renderBatch() then calls _switchTexture().
  • This performs the call this._currentRenderer.updateTextureSize(gl, new Float32Array([this._currentTextureAtlas.glTextureWrapper.image.width, this._currentTextureAtlas.glTextureWrapper.image.height])). In other words, it explicitly requires the renderer to update the resolution recorded on the video card.
  • _switchTexture() then calls GLTextureManager.useTexture(gl, entity.atlas.glTextureWrapper). Note that this is explicitly passing a single texture per batch, no more.
  • _switchTexture() finally calls Renderer.refreshTextureGL(), which is again configurable.
  • renderBatch() then calls Renderer.draw(), which is again configurable. This uses texture data to draw your image to the screen.

Problems with Old Texture Pipeline

This pipeline is perfectly functional for single-texture objects. However, it prevents the use of multi-texture objects at several key locations. This in turn prevents the use of particularly nice advanced shaders that require multi-texture inputs to properly evaluate.

Key problems include:

  • GLTextureManager.uploadTexture method cannot upload more than one image per atlas.
  • GLRenderManager._switchTexture method assumes one image per entity when calling GLTextureManager.useTexture method.
  • GLTextureManager.useTexture method can only write to texture unit 0, preventing the use of textures in parallel.

What are Texture Units?

These turn out to be central to the entire operation. Recall that a shader uses uniform sampler2D inputs to access textures. How do we set these? It’s as simple as sending the graphics card a single number, as in this excerpt from the Kiwi.js library:
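The excerpt boils down to a single call, sketched below. A stub gl object stands in for a real WebGL rendering context, and the uniform location is assumed, so the fragment is self-contained:

```javascript
// Stub standing in for a WebGL rendering context.
var gl = {
    uniform1i: function ( location, value ) {
        this.lastSamplerUnit = value;
    }
};
var samplerUniform = {};  // would come from gl.getUniformLocation()

// Point the shader's sampler uniform at texture unit 0:
gl.uniform1i( samplerUniform, 0 );
```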

This is the very important 0.

You see, while many textures can be loaded into memory at once, WebGL only gives you access to up to 32 at a time. These active textures are called texture units. They are named TEXTURE0 through to TEXTURE31.

When you activate a texture just prior to drawing graphics, you assign it to a texture unit. For example, again drawing from Kiwi.js:
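Again in sketch form, with a stubbed context (the constant values shown are the real WebGL enum values):

```javascript
// Stub recording calls; real code uses an actual WebGL rendering context.
var gl = {
    TEXTURE0: 33984,    // real WebGL constant for texture unit 0
    TEXTURE_2D: 3553,   // real WebGL constant for 2D texture targets
    activeTexture: function ( unit ) {
        this.activeUnit = unit;
    },
    bindTexture: function ( target, texture ) {
        this.boundTexture = texture;
    }
};
var texture = {};  // would come from gl.createTexture()

gl.activeTexture( gl.TEXTURE0 );           // direct operations to unit 0...
gl.bindTexture( gl.TEXTURE_2D, texture );  // ...and attach our texture to it
```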

This does two things. First, activeTexture tells the video card that texture operations are going into a particular texture unit. Second, bindTexture tells the video card to attach the current texture unit to a particular area of video memory where a texture is stored.

It is possible to attach up to 32 textures at a time using these texture units. In practice, most texture calls only require unit 0 to be bound. Advanced shaders may add additional units, although in my experience it’s uncommon to require more than a half dozen or so.

It would be great if the texture pipeline supported the use of multiple texture units.

New Texture Pipeline

Well, as it happens we made a new texture pipeline which does support multiple texture units. This adds some functionality deep within the engine, while retaining more or less the same series of steps that the old pipeline followed. This new behaviour is as follows:

  • GLTextureWrapper.image is now a getter/setter, permitting the inclusion of new textures after initialisation.
  • TextureAtlas now has a createGLTextureWrapper() method. This allows the atlas to create its own wrapper and register it to the GLTextureManager. This is essential for multitexturing, because a multitexture atlas needs to create several wrappers, and the manager cannot anticipate this need.
  • GLTextureManager now has a registerTextureWrapper() method. This allows atlases to register multiple wrappers.
  • TextureAtlas now has an enableGL() method. This is also essential for multitexturing. It requests the GLTextureManager to use its image (or images) within specific texture units. We considered implementing this on the GLTextureWrapper itself, removing WebGL-specific code from the more generic TextureAtlas, but it was not possible for the wrapper to request a specific texture unit without external code telling it what to request. It made more sense to put the code in one place on the atlas.
  • GLTextureManager.uploadTexture now uses the new createGLTextureWrapper() systems on the atlas, rather than manually creating a single wrapper as outlined above.
  • GLRenderManager uses the new enableGL() atlas systems, rather than manually updating a single texture.
  • GLTextureManager.useTexture now accepts an optional textureUnit parameter to specify which texture unit to use. If this parameter is not specified, it defaults to 0. In addition, it is not called directly from the GLRenderManager, but rather from the enableGL() method on the atlas. This allows multiple units to be active at once.

This results in a slightly different per-batch render sequence, where altered steps are emphasised:

  • renderBatch() calls _switchRenderer(), which calls Renderer.enable(). Custom renderers may set uniforms to address several texture units.
  • renderBatch() then calls Entity.renderGL(). This calls Renderer.addToBatch() or .concatBatch(), and may include texture work.
  • renderBatch() then calls _switchTexture().
  • This calls atlas.enableGL(). You can use a configurable plugin atlas. The default TextureAtlas performs the following steps:
  • Updates texture size.
  • Calls GLTextureManager.useTexture() on texture unit 0.
  • Removed: _switchTexture() finally calls Renderer.refreshTextureGL(), which is again configurable.
  • renderBatch() then calls Renderer.draw(), which is again configurable. This uses texture data to draw your image to the screen.

This shows how it is now straightforward to write plugins to perform advanced texture unit management. Because your texture management is performed by the atlas itself, you need simply write a new atlas, and tell it to use as many textures as necessary. The texture manager need not know anything about its contents; it need simply do as it is told.

“Dirty” Textures

Because it is computationally expensive to upload or activate textures, we perform several checks to see whether it is necessary during _switchTexture. This includes the following checks:

  • Is the texture different to the last batch?
  • Is the renderer different to the last batch, requiring an update to its texture parameters?
  • Is the texture dirty?

If any of these are true, then the library does the extra work to prepare the texture for rendering. If they are all false, then the texture is logically ready and needs no extra work.

A “dirty” texture is simply flagged for update. We do this by setting atlas.dirty = true. This is only important if you’ve actually changed the texture itself. For example, the TextField object in Kiwi.js might change its text; this would involve changing the image, so the object sets dirty and the renderer picks up the changes.

Using the New Pipeline

Right now, the pipeline just works. It does everything the old pipeline did. You will have to write your own renderers (it is easy to base them on TextureAtlasRenderer) if you want to implement multi-texture solutions.

However, we are working on plugins to utilise this new functionality. Two ideas so far include the MultiTextureAtlas and the MegaBatchRenderer.


MultiTextureAtlas

This plugin is well under way. It is simply a modified texture atlas with an array of textures, designed for use in normal map lighting and other shaders. When we roll out some exciting new shader plugins in the near future, MultiTextureAtlas will be released alongside them.

If MultiTextureAtlas proves successful, we will probably fold its functionality into the core Kiwi.js library in version 1.2.0. It’s just too useful not to have.


MegaBatchRenderer

This is still in the concept stages, and may require additional engineering under the hood. The basic idea is an accelerated batch renderer. If you remember the article on the new render pipeline, you’ll recall that Kiwi.js speeds up rendering by combining similar items into a single draw call. However, objects with different textures must be split into different batches.

MegaBatchRenderer would load multiple textures into memory at once, using all 32 texture units. In theory, this could allow very complex scenes to render with a single draw call. In practice, it may be difficult to swap between several textures. Branching behaviour is expensive on a GPU (in essence, this means that video cards slow down if you ask them to make choices). This solution would also entail some overhead on shaders themselves, allowing them to combine more freely. While this might be effective, it would make the shaders considerably less portable. In its current implementation, Kiwi.js is very shader-friendly; you can import and export shaders with a minimum of effort.

In Review

You’ve now read about texture units, and how we can use them to work with multiple textures. You should also have an idea of how to work with the texturing system to support your own creativity.

Frankly, this is pretty esoteric. It will, however, prove useful when we come back to talk about making your own shaders at the end of this series. That will be exciting.

Benjamin D. Richards.

New Transform Aliases (Kiwi.js v1.1.0)

Hello, and welcome to the second in a series of in-depth articles about Kiwi.js v1.1.0, now available for use at Today we’re talking about transform aliases and how they make your life easier. I’m Ben Richards, and I’ve spent time with the biggest games being made in Kiwi.js, figuring out how to accelerate the coding process. I’m looking forward to sharing these improvements with you.

Who Needs to Read This Post?

All users. Kiwi.js is all about moving objects around the screen, and transform aliases make this quicker and more flexible than ever.


What’s a Transform?

Before discussing aliases, we must first understand the Transform object to which they refer.

The prototype Kiwi.Geom.Transform holds and implements information about transformation. This includes the following fundamental properties:

  • x
  • y
  • rotation
  • anchorPointX
  • anchorPointY
  • scaleX
  • scaleY

This tells you everything you need to position and lay out an image.

These properties are combined to create a transformation matrix, an efficient method for applying hierarchical transformations. The matrix is beyond the scope of this discussion, but it is the ultimate destination for transformation information.

Rotation and Anchor Points

If you’re used to degrees, I’ve got bad news. Mathematics and computer graphics don’t use degrees; they use radians. One radian is the angle subtended at the centre of a circle by an arc equal in length to the circle’s radius. It’s about 57.3 degrees. Because circumference equals 2 * pi * radius, a full circle of 360 degrees is precisely 2 * pi radians. A half circle is precisely pi radians, and a quarter circle or right angle is precisely pi / 2 radians.

You can either learn to love radians, or use built-in conversion functions in Kiwi.js. Simply call Kiwi.Utils.GameMath.degreesToRadians( degrees ) or Kiwi.Utils.GameMath.radiansToDegrees( radians ) to convert an angle back and forth.
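The conversions behind those helpers are one-liners; this standalone sketch shows the underlying arithmetic:

```javascript
// The arithmetic behind degree/radian conversion:
// 180 degrees corresponds exactly to pi radians.
function degreesToRadians( degrees ) {
    return degrees * Math.PI / 180;
}
function radiansToDegrees( radians ) {
    return radians * 180 / Math.PI;
}

console.log( degreesToRadians( 90 ) );       // ~1.5708 (pi / 2)
console.log( radiansToDegrees( Math.PI ) );  // ~180
```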

Rotation does not occur about the x,y position of the object. That coordinate actually maps the top left corner. Rotation (and scaling) happens on an offset, usually set to the middle of the object. This behaviour looks right. The offset is described by anchorPointX,anchorPointY. You can change it, and this is useful for some desired effects.

Diagram explaining anchorPoint paradigm

NOTE: anchorPointX,anchorPointY nomenclature is actually an alias. We will discuss this later in the article.

Object Space Transforms

Kiwi.js has a hierarchical scene graph. This means you can add game objects to a Group, and then transform the Group to transform all the objects inside it as well.

How does this affect the object transform? For example, if you rotate the parent Group by 3, does the child’s rotation also equal 3?

Actually, the child’s rotation remains unchanged. Only the parent transform is altered. Kiwi.js preserves the child’s transform data, and compiles the parent data into the transform matrix behind the scenes. We call this child’s data object space transformation, because it doesn’t care about the wider world. It is an extremely useful model. You just need to worry about where your object is standing relative to its parent; you can change that context by transforming the parent, and the object need never know.

For example, you could create a pirate and a pirate ship, add them to the same Group, and then animate the Group to bob and sway with the high seas; the pirate will appear to be fixed to the deck of the ship. In fact, if you move the pirate left and right, they will appear to walk along the deck even if the ship is swaying back and forth. You could go further, and add the bobbing ship to another group to represent ocean currents, add that to another group to represent the tides, another group to represent the Earth spinning, yet another group for the Earth moving round the Sun, and so on. This is getting a bit silly, but the point remains: even if you do all this astronomical transformation, you can still just move the pirate left and right, without accounting for solar peregrinations and epicycles. Isn’t that handy?

What are Aliases?

They’re shortcuts to frequently-used data.

Many objects in Kiwi.js have a Transform object. This manages transformation: position, rotation, and scaling in space. However, it is not always intuitive for you to have to call pirate.transform.x every time you want to move a character. Accordingly, shortcuts or aliases are implemented on several game objects. Now you may simply call pirate.x. Kiwi.js automatically transfers that information to the Transform, and your pirate moves left and right.

Old Aliases

Previous versions of Kiwi.js did not provide consistent access to aliases. The Sprite object had rotPointX and rotPointY properties, while the Group object did not. The TextField object had width and height properties, but did not define them. The Transform object itself had a simple scale alias, but this was not accessible elsewhere. And so on.

All the information was there, but sometimes you had to dig a little to get at it. We needed to make it easier and more intuitive.

New Alias Standard

In Kiwi.js v1.1.0, we’ve implemented a robust new standard for aliases. All transformable objects except cameras have the following properties:

  • x
  • y
  • rotation
  • anchorPointX
  • anchorPointY
  • rotPointX
  • rotPointY
  • scale (SET only)
  • scaleX
  • scaleY
  • worldX (GET only)
  • worldY (GET only)

Anchor Point

The anchorPointX, anchorPointY properties alias to the original properties rotPointX, rotPointY. The term “rotPoint” or “rotation point” is potentially confusing, because it describes both rotation and scale. Alternative names were discussed, including “transformPoint”. However, transformPoint implied that it would also affect x, y; and there was already a method on Transform called transformPoint(), used for projecting coordinates via a matrix. In the end, “anchorPoint” was determined to be most logical and accessible. The terminology was inspired by Adobe After Effects, where it serves an identical function.

rotPointX and rotPointY still work, but will be deprecated in favour of the anchorPoint nomenclature in a future major version.

Scale and worldXY: Set/Get Only Aliases

These aliases do not correspond to persistent, singular values.

scale sets both scaleX and scaleY. Most of the time you will want these to be paired. However, because we cannot guarantee that they are the same, scale does not have a value that you can read. You can only use it to set values.
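A set-only alias like this can be pictured as a property with a setter but no getter. The sketch below uses a plain object, not the actual Kiwi.js implementation, to show the behaviour:

```javascript
// Sketch of a set-only alias: writing `scale` updates both axes,
// but reading it yields nothing, because there is no getter.
var transform = { scaleX: 1, scaleY: 1 };
Object.defineProperty( transform, "scale", {
    set: function ( value ) {
        this.scaleX = value;
        this.scaleY = value;
    }
} );

transform.scale = 2;
console.log( transform.scaleX, transform.scaleY );  // 2 2
console.log( transform.scale );                     // undefined (no getter)
```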

worldX and worldY break out of object space and tell you precisely where in the world your object is located. These are calculated behind the scenes using matrix operations. You cannot set world values in Kiwi.js v1.1.0.

We only provide world location, not world scale or world rotation. This is because of the nature of transformation matrices. While they are created from location, rotation, and scale data, they can quickly twist into shapes that are too complicated for simple rotation and scale values to describe. It’s fortunate that the mathematics yields clean position values at all.

New Functionality

We have added several functions that are not standard aliases, but nevertheless allow you to perform useful functions. These include width and height properties, scaleTo methods, and the centerAnchorPoint method. We also discuss the camera and the use of shear.

Width and Height

This applies to Entity-type objects: Sprite, StaticImage, TextField and TileMapLayer. It does not apply to Groups or Cameras.

All entities now have a width and height. By default, these refer to the pixel dimensions of their source image. They do not change when the entity is scaled.

TileMapLayer is an Exception

While TileMapLayer objects have width and height, they do not give pixel dimensions. Instead these properties refer to the number of tiles in the map. You can get pixel width and height values akin to those of other entities using the properties TileMapLayer.widthInPixels and TileMapLayer.heightInPixels. This was implemented early and unfortunately cannot be changed without breaking the current API.

scaleToWidth and scaleToHeight

This applies to Entity-type objects: Sprite, StaticImage, TextField and TileMapLayer. It does not apply to Groups or Cameras.

All entities now have the methods scaleToWidth( value ) and scaleToHeight( value ). These set the scale of the entity so it matches that width or height. This is based on the ratio between value and (width or height). You can call one of these methods once, to uniformly scale an Entity to a desired dimension. This does not alter the width or height properties. As it affects scale, this transformation remains persistent after one call. It is useful when setting up objects.
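The logic amounts to a single ratio. In this standalone sketch, a plain object stands in for an Entity, and scaleToWidth is written as a free function purely for illustration:

```javascript
// Sketch of the scaling logic behind scaleToWidth: a uniform scale
// computed from the ratio of the target value to the base width.
function scaleToWidth( entity, value ) {
    var ratio = value / entity.width;
    entity.scaleX = ratio;
    entity.scaleY = ratio;
}

var sprite = { width: 128, height: 64, scaleX: 1, scaleY: 1 };
scaleToWidth( sprite, 256 );
console.log( sprite.scaleX, sprite.scaleY );  // 2 2
console.log( sprite.width );                  // 128 (width is unchanged)
```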


centerAnchorPoint

This applies to Entity-type objects: Sprite, StaticImage, TextField and TileMapLayer. It does not apply to Groups or Cameras.

All Entities have a new method: centerAnchorPoint(). This moves the anchorPoint (rotPoint) to the geometric center of the object: [width/2, height/2]. This is a convenience method for quickly aligning objects.

On TileMapLayer, this method functions as expected, but it uses widthInPixels and heightInPixels instead.


Shear

We discussed implementing a shear function on Transform or Matrix. This would permit some new visual possibilities. However, it would do two undesirable things.

  • Adding this parameter would increase the cost of matrix operations.
  • We plan to implement glMatrix.js for high performance matrix mathematics in future releases, and this does not have a shear function. Adding shear now would entail taking it out later.

Ultimately we decided against implementing shear.

You can create your own shear effects by placing an object inside a group, rotating the object, and scaling the group in one axis.


Camera

The Camera has a Transform, but it has no aliases. Because the Camera is not frequently animated, and its default values are largely unchanging, we consider any transformation of the camera to be an advanced topic. Users should therefore know exactly what they’re doing if they access Camera.transform.

In Review

You should now understand the principles of object space transformation. You can use a wide range of aliases to manipulate objects, and use new methods to configure their transformation. These were much-demanded features, so I know you’ll find them useful!

Benjamin D. Richards

New Render Pipeline (Kiwi.js v1.1.0)

Hello, and welcome to the first in a series of in-depth articles about Kiwi.js v1.1.0, now available for use. I’m Ben Richards, and I’ve been elbow-deep inside this beast for a couple of months. I figure I’ll show you around.

Who Needs to Read This Post?

Advanced users. If you want to write plugins to create custom GameObjects, new renderers, or related subjects, this is important. If you’re curious about WebGL rendering, or want to squeeze more performance from your games using best practices, you’ll find this interesting. If you think Kiwi does everything you need already – and we hope you do! – then this is probably too much information.

What’s a Render Pipeline?

It’s a piece of jargon. It refers to the process whereby objects are translated from the numbers you set in memory (like ninja.x = 600), into images on the screen (like a ninja standing on the right). From the user’s perspective, this happens automatically. But from an engine point of view, there are a lot of steps we must follow to make those images appear. As your data moves through those steps it is transformed, much like an industrial product streaming through a pipe. And much like that industrial product, we’d like to pump as fast as possible.

Kiwi.js has two complete pipelines under the hood. The first, the canvas renderer, uses the canvas element as defined by the HTML5 specification. This has the widest availability, and is easiest to implement. The second renderer uses WebGL, through which your browser talks directly to the graphics hardware, allowing faster and more flexible graphics. For obvious reasons we default to the WebGL renderer. However, some browsers do not yet support WebGL, so for maximum compatibility we provide the canvas renderer.

What’s a Batch Renderer?

How did you…? Never mind, it’s a good question. Batch rendering is key to getting the most out of Kiwi.js WebGL performance.

You see, drawing WebGL graphics has a cost. We could draw every sprite in the game one by one, but this would be very slow. Instead, we use two pieces of information to create batches. All renderable objects have an atlas (the texture file) and a renderer (the system that draws it to the screen). A batch is formed when two game objects that would be drawn in sequence have the same atlas and renderer. Kiwi.js automatically loads everything in a batch onto the video card in one operation, ensuring we make as few calls to the card as possible.

Obviously, you can optimise this process by putting together your game properly. This means proper scene graph management. The scene graph is simply an ordered hierarchy of everything visible in the current State, and it is drawn beginning to end. This means the last object you add to the State is drawn on top, and so on.

Consider a scene constructed thus, where every object type has its own texture:

  1. Ninja
  2. Ronin
  3. Background
  4. Ronin
  5. Ninja
  6. Ronin
  7. Player

This is a terrible scene graph, and I should be ashamed for ever making it. The first two characters are behind the background, so you’ll never see them. And the various ninja and ronin are not added in any particular order, so they will all form individual batches. There are 7 objects, and it will take 7 batches to render them all. In a large scene, this could cause considerable slowdown.

Instead, consider using Groups thus:

  1. Background
  2. Ninja Group
    1. Ninja
    2. Ninja
  3. Ronin Group
    1. Ronin
    2. Ronin
    3. Ronin
  4. Player

This is a much better scene graph. Now we can add enemies to dedicated groups, and the game engine will automatically collate them into efficient batches. This scene will render with 4 batches.

You can also improve performance by combining atlases. An atlas correlates to a single file. When you create your images and sprite sheets, consider combining them into larger files. If ninja and ronin shared a sprite sheet, the optimised scene could put them into the same batch, reducing the batch count to 3!
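The batching rule is easy to model: walk the scene graph in draw order and start a new batch whenever the atlas or renderer changes. This sketch (an illustrative model, not engine code) reproduces the counts from the examples above:

```javascript
// Count draw batches: a new batch starts whenever two consecutive
// objects differ in atlas or renderer.
function countBatches( sceneGraph ) {
    var batches = 0;
    var prev = null;
    sceneGraph.forEach( function( obj ) {
        if ( !prev || prev.atlas !== obj.atlas ||
                prev.renderer !== obj.renderer ) {
            batches++;
        }
        prev = obj;
    } );
    return batches;
}

// The terrible scene graph: alternating textures force 7 batches.
var bad = [ "ninja", "ronin", "bg", "ronin", "ninja", "ronin", "player" ]
    .map( function( atlas ) {
        return { atlas: atlas, renderer: "texture" };
    } );

// The grouped graph flattens into runs of identical atlases: 4 batches.
var good = [ "bg", "ninja", "ninja", "ronin", "ronin", "ronin", "player" ]
    .map( function( atlas ) {
        return { atlas: atlas, renderer: "texture" };
    } );

var badBatches = countBatches( bad );   // 7
var goodBatches = countBatches( good ); // 4
```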

Just be careful when combining atlases. Some devices cannot handle resolution above 2048×2048, so regard this as the absolute limit.

Now that we know about batches, we can talk about pipelines.

The Old Render Pipeline

Kiwi.js version 1.0.0 was a fine piece of code. We held onto its canvas renderer essentially unchanged, and almost all of the WebGL pipeline remains intact.

Here’s how things went down for a single batch under WebGL renderer 1.0.1:

  • Switch texture to the current batch’s texture. This makes the texture available in video memory.
  • Switch renderer to the current batch’s renderer. This sets up shaders on the video card.
  • Clear current renderer. This primes data arrays to receive new information.
  • Call renderGL() on all members of the batch, in sequence. The main purpose of this method is to deliver information to the current renderer’s data arrays. It is also a good place to perform just-in-time operations such as texture updates.
  • Call refreshTextureGL() on the current texture atlas. This ensures that the texture is properly updated in video memory.
  • Call draw() on the current renderer. This tells the video card to draw everything to the screen.

Reasons For Change

We first noticed problems when examining bugs in the TextField object. This object does something unusual: you can change its text mid-game. And we asked ourselves, where does this re-drawn text actually appear in the pipeline?

You see, the TextField only redraws itself in its renderGL() method. This happens after the texture is switched. This could be a problem: if the texture that was just uploaded is no longer correct, can we trust the graphics we draw to be timely?

In addition, although it isn’t apparent from the list above, v1.0.1 actually contained two pipelines. Some entities could not render as batches, and were sent through a slightly different sequence of steps to achieve the same result. This added extra effort and complexity.

The New Render Pipeline

We decided to re-order the render pipeline. After all, all the pieces worked fine. They just needed a little bit of re-ordering.

Here’s the new pipeline:

  • Switch renderer to the current batch’s renderer.
  • Clear current renderer.
  • Call renderGL() on all members of the batch, in sequence.
  • Switch texture to the current batch’s texture.
  • Switch blend mode to the current renderer’s blend mode. This is a new feature and will be examined in more detail in a future article.
  • Call draw() on the current renderer.
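As a loop, the new per-batch flow looks roughly like this. The method names are illustrative stand-ins, not the engine’s actual internals; the stubs simply log their calls so you can see the order:

```javascript
// Sketch of the per-batch control flow in the new pipeline.
// The renderer, atlas and blend-mode objects here are logging stubs.
var log = [];
function step( name ) {
    return function() { log.push( name ); };
}

var renderer = {
    enable: step( "switch renderer" ),
    clear: step( "clear" ),
    draw: step( "draw" )
};
var atlas = { enableGL: step( "switch texture" ) };
var blendMode = { apply: step( "switch blend mode" ) };
var batch = [
    { renderGL: step( "renderGL" ) },
    { renderGL: step( "renderGL" ) }
];

renderer.enable();
renderer.clear();
batch.forEach( function( entity ) {
    entity.renderGL();
} );
atlas.enableGL();
blendMode.apply();
renderer.draw();
// log now records the order: renderer and clear first, per-object
// renderGL next, then texture, blend mode, and the final draw call.
```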

This new order doesn’t look very different, but it makes several operations much clearer. We could consider it as an even simpler series of steps:

  • DRAW

This clearly delineates the responsibility of each step, and makes it clear what should be done where in the process.

In addition, the pipeline was unified into a single flow for both batchable and non-batchable objects. After all, a non-batchable object is just a batch of length 1.

Implications For Game Objects

You may now perform any number of useful operations in the renderGL() method, and they will be reflected in the rendered graphics. Some illustrative examples follow.

Dynamic Textures

You can redraw your textures every frame, perhaps by drawing on an HTML5 canvas, or by rendering video or capturing a webcam feed. This is how we render text efficiently; because there are so many parameters to change (the text itself, size, alignment, colour, font etc), we cannot redraw it every time one parameter changes. Instead we mark it as dirty, and then during renderGL() we do a single redraw operation. This data travels to the video card and all is well. If the object is hidden, it doesn’t even get to that step! Minimum effort, maximum performance.
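The dirty-flag pattern described here is straightforward. This sketch (plain JavaScript, standing in for the real TextField internals) shows how any number of property changes collapse into a single redraw:

```javascript
// Dirty-flag redraw: many property changes, at most one redraw per frame.
// Illustrative only - the real TextField redraws onto an HTML5 canvas.
function TextSketch( text ) {
    this._text = text;
    this._dirty = true;
    this.redraws = 0;
}

Object.defineProperty( TextSketch.prototype, "text", {
    get: function() { return this._text; },
    set: function( value ) {
        this._text = value;
        this._dirty = true; // mark dirty, but do not redraw yet
    }
} );

TextSketch.prototype.renderGL = function() {
    if ( this._dirty ) {
        this.redraws++; // one redraw, however many properties changed
        this._dirty = false;
    }
};

var field = new TextSketch( "Hello" );
field.text = "Hello,";
field.text = "Hello, world";
field.renderGL(); // a single redraw despite two changes
field.renderGL(); // no further redraws until the next change
```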

Protip: Duplicate an atlas for peak performance! If you want to duplicate the same text field into several places, such as a series of identical signs or monitors in a level, try something like this:
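(A minimal sketch of the idea: point the duplicate signs at one shared atlas, so the text is redrawn once for the whole batch. The atlas property is real, but the objects here are illustrative stand-ins, not actual Kiwi.js setup code.)

```javascript
// Sketch: several "signs" sharing one texture atlas. Because they share
// an atlas (and a renderer), they form a single batch, and the text is
// redrawn once rather than once per sign.
var masterSign = { atlas: { text: "DANGER", dirty: true } };

var signs = [ masterSign ];
for ( var i = 0; i < 3; i++ ) {
    // Each duplicate reuses the master's atlas instead of creating its own
    signs.push( { atlas: masterSign.atlas } );
}

// Changing the text once marks the one shared atlas dirty for every sign
masterSign.atlas.text = "SAFE";
masterSign.atlas.dirty = true;
```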

Per-Object Shaders

Sometimes you want to update a shader based on an entity’s own properties and circumstances. But don’t shaders come before per-entity calls? Yes, they do. Does that mean you can’t set shader uniforms in renderGL()?

Actually, it’s fine.

You can continue to set shader uniforms in renderGL(). However, be aware that this only makes sense if you are rendering a single object, because all objects in a batch are processed in a single step before the batch goes to draw. If you are designing a game object that does this, you must design a non-batch renderer. This simply needs the property isBatchRenderer = false. We’ll be talking a little more about renderer design in a later article.
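For illustration, the skeleton of such a renderer might look like this (the isBatchRenderer flag is real; the rest is a hypothetical sketch, not the actual plugin API):

```javascript
// Sketch: a renderer that opts out of batching, so per-object shader
// uniforms set in renderGL() are safe. Illustrative skeleton only.
function MyEffectRenderer() {
    // With this flag false, the pipeline treats every object
    // as its own batch of length 1.
    this.isBatchRenderer = false;
}

MyEffectRenderer.prototype.enable = function() {
    // Set up shaders here; the single object in the batch can then
    // safely set its own uniforms from renderGL().
};

var renderer = new MyEffectRenderer();
// renderer.isBatchRenderer === false
```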

Our Particle Effects Plugin uses a non-batch renderer: numerous particles are rendered as part of a single game object, and as part of this trickery we need to set unique animation parameters on that object.

In Review

Today we’ve discussed batch renderers, and how to get the most out of your scenes; and how the renderGL() method on game objects is a useful place to run per-entity operations. I hope you find it useful.

Benjamin D. Richards

Introducing Kiwi.js Version 1.1.0 “Shigeru”

Hello, and welcome to the release of Kiwi.js v1.1.0! I’m Ben Richards, part of the core engine development team here, and we thought we’d take the time to tell you why v1.1.0 is the best Kiwi.js yet.

This is the first in a series of Kiwi.js v1.1.0 posts, in which we’ll go into depth about new features and under-the-hood enhancements. If you don’t know what Kiwi.js is, now is a great time to find out, and perhaps get involved with development yourself!

Today we’ll be giving you an overview of v1.1.0, explaining what this version is for, and giving a taste of some new features. So without further ado, let’s see what’s up!

What’s the Plan?

Kiwi.js v1.1.0 is versioned as a “minor release”. Using semantic versioning, this means we are allowed to add new functions and perform bug fixes, but we can’t remove or change old functions. Your v1.0.1 Kiwi.js projects should continue to perform just as they did before – but maybe faster and with fewer bugs.

The purpose of this minor release is to make animation and WebGL graphics easier and more flexible. To that end, we’ve added several useful features, both user-facing and internally. The bit that gets me excited is our rebuilt WebGL rendering pipeline, which has just blown the doors off future possibilities.

With v1.1.0 complete, it’s possible to make a huge range of plugins to do amazing things. And we’ve got plans. Our main plan is to pioneer new features as plugins. It’s a good way to introduce these features, because it doesn’t impact core functionality, and it allows us to revise the API on those features after seeing them behave in the wild. If we like the feature, we can add it to core in a future minor release. Plus, once we put something in, we can’t take it out until the next major release – version 2.0.0. Semantic versioning requires a stable API (application programming interface) and backwards compatibility within major versions. With plugins, we can explore our options and only put it into core once we have an API that we like.

What sort of plugins? Oh, little things like advanced physics, lethal AI, multiplayer with all your friends, and real-time lighting like you’ve never seen before.

New Stuff

There’s so much interesting stuff in Kiwi.js v1.1.0 that we couldn’t possibly fit it all into one blog post. (Well, we could, but your brain would explode. And we respect your carpet far too much to let that happen.) So we’re splitting it up into a series of posts.

Your favourite part of this series will depend on what you’re doing with Kiwi.js. Some parts will be quite technical – maybe you’ll learn something, or better yet, you’ll find a place where we messed up. (And tell us. We like that part best.) Other parts will be more accessible, describing some cool ways to use new features.

Before we get into the new features, we’ll go over a series of little things that don’t merit an entire blog post on their own. After that, we’ll summarise upcoming articles. The series will alternate between technical and API topics. We have a lot to say on the technical side; we hope this will prove a useful resource to other WebGL developers, as well as showing you how best to interface with Kiwi.js if you’re into shaders and other advanced topics.

Let’s get started!

The Little Things

New Examples Repo

Kiwi.js has plenty of examples, showing all sorts of useful things. Too many, in fact! All that goodness was bloating the core Kiwi.js download on Github.

So we’ve moved the examples to a new repo. If you want to see Kiwi.js in action, grab that repo, fire up your localhost server (we suggest WAMP or MAMP), and see what Kiwi.js can do.

WebGL Default Renderer

Like it says, WebGL is now the default renderer. Blazing fast graphics are easier than ever: the moment you create a game, it uses WebGL. Don’t worry if your browser doesn’t support WebGL; Kiwi.js will detect the problem and fall back to Canvas mode.

Kiwi.js now has the following renderer options, which you can use as values for “renderer” in a gameconfig object:

  • Kiwi.RENDERER_CANVAS (fallback renderer, guaranteed to work on any HTML5 browser)
  • Kiwi.RENDERER_WEBGL (default high-powered renderer with all the best features)
  • Kiwi.RENDERER_AUTO (autodetect WEBGL if it is present, CANVAS if not)

Camera Resets

When you swap States, all cameras in the game reset to default. (This means x and y return to 0, rotation to 0, scale to 1, and rotPointX, rotPointY to width / 2, height / 2.) This is important, because 99% of the time you’re using the default camera. If you move that camera and then swap States, you might find yourself looking at the wrong part of the new State.

This reset happens automatically. If you want to call it yourself, use either game.cameras.zeroCamera( camera ) or game.cameras.zeroAllCameras() (where game is your game object and game.cameras is a CameraManager object).
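The reset itself just restores the default transform values listed above. Here’s what zeroing a camera amounts to, sketched with a stand-in object rather than a real Kiwi.js Camera:

```javascript
// Sketch: reset a camera to its default transform, as the automatic
// State-swap reset does. A plain object stands in for a Camera here.
function zeroCamera( camera ) {
    camera.x = 0;
    camera.y = 0;
    camera.rotation = 0;
    camera.scaleX = 1;
    camera.scaleY = 1;
    camera.rotPointX = camera.width / 2;
    camera.rotPointY = camera.height / 2;
}

var camera = { x: 300, y: -40, rotation: 0.5, scaleX: 2, scaleY: 2,
    rotPointX: 0, rotPointY: 0, width: 800, height: 600 };
zeroCamera( camera );
// camera is back at the origin, with its rotPoint at ( 400, 300 )
```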

Alpha On Stage

It is now possible to make the game background transparent, allowing you to see the rest of the webpage behind it. This can be useful if you want to integrate Kiwi.js objects with the rest of your content, although you’ll probably need to do plenty of coding to pull it off.

To set a transparent background on the Game object game, set game.stage.color = "ffffff00" or game.stage.rgbaColor = { r: 255, g: 255, b: 255, a: 0 }.

To set the background opaque, set game.stage.color = "ffffffff", game.stage.color = "ffffff", game.stage.rgbaColor = { r: 255, g: 255, b: 255, a: 255 }, or game.stage.rgbColor = { r: 255, g: 255, b: 255 }.

Note that the game will assume a fully opaque alpha if you use rgbColor or set color to a six-digit hex value. Note also that using rgbaColor requires an alpha parameter in the range 0-255, not 0-1 as may be used in some color formats; this is to preserve internal consistency.
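The two hex forms differ only in whether an alpha byte is present. This sketch (not Kiwi.js source; it just illustrates the rule) shows the six- versus eight-digit behaviour and the 0-255 alpha range:

```javascript
// Parse "rrggbb" or "rrggbbaa" into rgba components, with alpha in 0-255.
// Six digits imply fully opaque, matching the Stage behaviour described above.
function parseStageColor( hex ) {
    return {
        r: parseInt( hex.substr( 0, 2 ), 16 ),
        g: parseInt( hex.substr( 2, 2 ), 16 ),
        b: parseInt( hex.substr( 4, 2 ), 16 ),
        a: hex.length === 8 ? parseInt( hex.substr( 6, 2 ), 16 ) : 255
    };
}

parseStageColor( "ffffff00" ); // { r: 255, g: 255, b: 255, a: 0 }
parseStageColor( "ffffff" );   // { r: 255, g: 255, b: 255, a: 255 }
```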

Thanks to user Viriatos for asking about alpha on stage.

Off-Axis Tile Maps

It is now possible to freely transform TileMapLayer objects. You may reposition, rotate, and scale them freely. This can be very useful for creating interesting backgrounds.

However, we haven’t yet managed to get physics to follow them. You should not use transformed tilemaps as interactive objects just yet. We’re working on advanced physics systems to overcome this problem. For now, explore the aesthetic possibilities.

Goodbye, willRender

Some objects had a willRender flag to control their visibility. This is deprecated in favour of the visible flag. Because we’re not putting out a major release, willRender still works – we simply advise against using it, and redirect calls to visible. This removes an unnecessary divergence of function.
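Deprecations like this are usually implemented as a property alias that redirects to the new name. Here’s the pattern in a sketch (illustrative, not the engine’s exact code):

```javascript
// Sketch of a deprecation alias: willRender reads and writes visible.
function EntitySketch() {
    this.visible = true;
}

Object.defineProperty( EntitySketch.prototype, "willRender", {
    get: function() { return this.visible; },
    set: function( value ) { this.visible = value; }
} );

var entity = new EntitySketch();
entity.willRender = false; // the old API still works...
// ...but it simply drives the new flag: entity.visible === false
```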

That’s it for the small things. Now for the big stuff.

New WebGL Render Pipeline

We’ve reordered the WebGL rendering pipeline in the name of greater accessibility. We already had a super-fast batch renderer, which would combine objects with identical texture and renderer settings into a single draw call. But now each batch follows a series of steps:

  • Enable renderer. Here you can set shader uniforms.
  • Clear vertex buffers.
  • Call per-object renderGL() methods. Here you can run per-object graphics operations and define geometry.
  • Upload texture/s to graphics card.
  • Manage blend mode.
  • Draw to screen.

In the render pipeline article we’ll explain what we used to do, why we changed it, and how you can use this to make powerful game objects.

New Transform Aliases

After consulting the needs of game designers, we’ve come up with some new shortcuts which we think you’ll find most useful. Game objects now have a unified set of aliases to transform properties, and we’ve added some new functions and names to boot. Want to scale an object in both x and y at once? Does anchorPointX sound cooler than rotPointX? Just how incredibly useful are transforms anyway? All shall be revealed in this article.

New WebGL Texture Pipeline

When it comes down to it, all of Kiwi.js is a system for drawing textures onto the screen. Thus far we’ve given you tools to draw with one texture at a time. But what if you could draw data from multiple textures at once? What if you could combine data, make it do more than just replicate the same set of colors? This is the domain of WebGL shaders. And we’ve taken all the hard work out of setting up your WebGL environment to drive advanced shaders.

In this article we’ll examine the flow of textures through the render pipeline and explain how they are arranged on the video card.

New Tagging System and Object Selection

Add abstract tags to game objects! Label friends and foes, set moods, inform an enemy that they are now on fire – you can do all this and more without developing new classes and functions.

In this article we’ll talk about ways to identify and sort your game objects. We’ll also look at the scene graph, and explain how it orders your assets.

New WebGL Blend Mode System

Users of our Particle Effect Plugin have seen how cool additive blending can be. Inspired by this, we’ve put blend modes into the core engine. But that’s not all; we’ve given you more control over renderers, so you can set up shader groups, clone renderers, and generally make your game objects look exactly how you want them.

In this article we’ll talk about WebGL blend modes, what they can (and can’t) do, and how to manage your renderers the way you want.

New Time Properties

Kiwi.js is super fast, but we’re ultimately at the mercy of the browser. If your game starts to run slow, whether it’s because of a pop-up from another site or because you’ve implemented ray-traced 3D global illumination in your game (and good on you), you don’t want your game to go into slow motion. Or maybe you do want some slow motion! Whatever the case, we’ve got your back with a range of new time parameters.

In this article we’ll explain the consequences of frame rate, and how you can create a whole range of super smooth animations with our new time properties.

New WebGL Shader System

We’ve looked at the new render pipeline, the new texture pipeline, and the new blending system. These are pretty awesome on their own, but they’re a hundred times better when you use them to fulfill their destiny. From the beginning, these improvements were intended to support the creation of advanced new shaders.

In this article we’ll look at the process for creating new shaders, from beginning to end. How do you write a shader? How do you write a plugin renderer? How do you apply them to objects in game? All this shall be answered, and it will be max wicked.

Change Log

Check out the readme on Github for the complete change log!

Beyond v1.1.0

What’s next for Kiwi.js? Unfortunately, your security clearance isn’t high enough to know all the details. But we have a pretty good idea of where we’re going. There will probably be a v1.1.1 targeting implementation problems on the more specialised platforms, like CocoonJS. We’ll test a series of exciting new features using plugins. And me? I’m thinking about v2.0.0. Two words: Deferred Renderer.

See you next time, when we talk about the new WebGL render pipeline!

Benjamin D. Richards