Resource entanglement problems and the problem of elegance

Category: Games, Meta-Thinking, Productivity, Rationality · Sep 22nd, 2015

I have a problem for you:

Suppose we want to maximize our combined number of X and Y products over 10 days, and each day we can either:

  • Get 3 copies of X if Y<7, otherwise get 4 copies of X.
  • Get 2 copies of Y if X<5, otherwise get 3 copies of Y.

Over 10 days, how should we structure our choices to obtain the maximum value of X + Y?

If you have a strategic bent, then this problem will be a bit of fun, and a bit more complex than it first seems. This is the type of problem that strategy and resource management games often depend on – when multiple resources have co-dependent growth patterns, simple rules can give rise to unexpectedly nuanced problems. Even better, the formula is modular and simple to tweak, so it’s easy to introduce variations.
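If you’d rather delegate the fun to a machine, the problem is small enough to brute-force. Here’s a sketch in Python (it assumes both counts start at zero and that each day’s threshold is checked before that day’s gain – the problem statement leaves both implicit):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best(days_left, x, y):
    """Maximum achievable x + y with days_left days remaining."""
    if days_left == 0:
        return x + y
    # Option 1: take X (3 copies if y < 7, otherwise 4).
    take_x = best(days_left - 1, x + (3 if y < 7 else 4), y)
    # Option 2: take Y (2 copies if x < 5, otherwise 3).
    take_y = best(days_left - 1, x, y + (2 if x < 5 else 3))
    return max(take_x, take_y)

print(best(10, 0, 0))  # → 35
```

The optimum of 35 comes from the schedule 2 days of X, 3 days of Y, then 5 days of X: the first two X days unlock the better Y rate, the three Y days unlock the better X rate, and the remaining days collect X at 4 per day.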

I call these types of problems resource entanglement problems, since their hallmark is multiple valuable resources with co-dependent growth curves. This type of problem relies on straightforward goals and rules to attract new players, and keeps those players with unexpected depth. A high ratio of depth to rule complexity is a hallmark of good game design.

But I’m not here today to talk about game design.

 

These problems are not always hypothetical. Resource management is a real field of study, and even outside of business there are plenty of tradeoffs in everyday life. Consider a classic: time vs. money. With money, we can purchase services to save time. With time, we can put additional effort into making money. Since we value both time and money, we find ourselves in an intriguing cycle.

But there is a trap. Elegant optimization problems are fun to think about, but elegance does not correlate with necessity. Elegant problems grab our attention but may distract us from more important ones. For example, resource management problems often become trivial when the ultimate value of one resource is removed. What if, hypothetically, we were to demote our value of money? How much does money mean? What about time? These questions risk sounding senseless at first, but depending on the person they may have surprising answers.

If you find yourself working on some intricate problem, perhaps the first thing to ask is what you can demote.

What is Luck?

Category: Design, Games, Mathematics, Meta-Thinking, Rationality · Sep 22nd, 2015

Richard Garfield, the designer of Magic: The Gathering, defines luck in his ITU Copenhagen talk as “uncertainty in outcome”. I think that by modeling human reasoning as Bayesian, we can come up with another fruitful, if not more fruitful, definition.

Suppose I were to take out a quarter from my pocket and ask you to guess my next twenty flips. You perceive the heads/tails probability to be 50/50 with high confidence, and so your accuracy on my first three flips, which turned out to be all heads, is very heavily luck based.

Now suppose that my next fourteen flips are all heads. At this point, you will be quite certain that the game was rigged, and your heads/tails probability becomes 100/0. Indeed, your next three guesses are correct. Curiously, at this point in the game we no longer perceive luck as being involved.

In Bayesian speak, our 50/50 distribution is our prior belief, and the 100/0 distribution our posterior belief. Note that we consider the prior (no pun intended) to be highly random, whereas the posterior is not. Wikipedia defines randomness as “the lack of pattern or predictability in events”, which suits our current purpose. We will roll (pun intended) with it.
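To make the flip example concrete, the update is one application of Bayes’ rule. A small Python sketch (the 1% prior on “the coin is rigged” is a made-up number, purely for illustration):

```python
def p_rigged(n_heads, prior_rigged=0.01):
    """Posterior probability the coin always lands heads, after n_heads heads in a row."""
    likelihood_rigged = 1.0            # a rigged coin shows heads every time
    likelihood_fair = 0.5 ** n_heads   # a fair coin shows n heads with prob (1/2)^n
    evidence = likelihood_rigged * prior_rigged + likelihood_fair * (1 - prior_rigged)
    return likelihood_rigged * prior_rigged / evidence

print(round(p_rigged(3), 3))   # after three heads: still mostly "luck" territory
print(round(p_rigged(17), 3))  # after seventeen heads: all but certain it's rigged
```

Three heads barely move the needle, while seventeen heads push the posterior past 0.99 – which matches the intuition in the story that the later correct guesses no longer feel like luck.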

Now another example.

Suppose I leave right now to catch the next bus to work. The bus arrives just as I get to the bus stop. How lucky!

Now suppose I tell you that I have memorized the bus table and had been stealing glances from the wall clock while talking to you, cutting our conversation short when it’s time to leave. The bus arrives just as I get to the bus stop. Not very lucky, but pretty calculating.

This example is insightful in two ways. Firstly, to be perceived as lucky I don’t have to intentionally make a choice. As long as the outcome benefits me, I seem lucky. Secondly, we can make things seem less luck-based via additional information. My new prior (which is my posterior after memorizing the timetable) was good enough to reduce randomness. What appears to you as random may be fairly predictable for me. Randomness can be subjective. In fact, it often is. The stock market, weather patterns, and even the search for a good romantic partner can be predictable for one and random for another. Thus, a definition of luck necessarily takes subjectivity into account.

So I think that perhaps a better definition of luck is “when an apparently unpredictable outcome offers high utility”. In Bayesian speak: when an outcome of good utility occurs despite a prior that doesn’t favor it. Note that utility, which is also subjective, affects our perception of luckiness. In a bet with a 90% chance to win one million and a 10% chance to lose one million, the winner is still declared pretty lucky. Our perception of utility is biased; in the case of loss aversion, the bias is sometimes advantageous.

Prior and utility are both extremely malleable, and a variety of cool insights arise (a.k.a. this is an insight dump where I stop being good at explaining things):

  1. Extremely complex conditions give rise to priors of similar quality for everyone. In guessing the 567,890th digit of pi, everyone starts from the same prior.
  2. In fact, if the universe is deterministic, the reason that statistics and probability exist in the first place is that events are too difficult to predict. In that case, there would be no true prior other than “X certainly will happen”. Our predictions will necessarily always be approximations and guesses (a guess can only be exact if it also states that “X certainly will happen”). If the world is deterministic, probability is a summary of what we don’t know.
  3. Depths of gameplay may arise from a sufficiently complex condition that allows continual optimization. A first card drawn from a nicely-shuffled deck is complex because shuffling is complex, but shallow because we don’t hold any information that can visibly optimize our prior. A card drawn during the middle of the game is complex not only due to shuffling but also due to a significant number of cards already in play; however, this draw is less shallow because we can use the cards in play to optimize our prior if we wish. When we are down to the last dozen or so cards, the draws are actually deeper because we now have very concrete information to optimize from.
  4. When you meet people who seem to you perpetually lucky or unlucky, you should strongly suspect your priors.
  5. Knowledge of probability and statistics is very valuable.
  6. Knowledge about how to improve your priors is extremely valuable. It’s a prior on how to change your prior when you see rules. See factors of correctness. A hidden value in Bayesian models is actually a value, or a prior, on confidence (how resilient your priors are to change). The phenomenon illustrated in factors of correctness might point to variability of the underlying prior on confidence. It seems like more research should be done in that area.
  7. Disguising depth through luck allows new players an excuse to feel better. It’s a desirable feature of design.
  8. The best estimation for probability is set in stone, so our versatility (once we have a good prior) comes from selecting our situation such that the most likely outcome yields optimal utility. Something like betting on heads on a rigged coin or timing our leave for the bus.
  9. Complexity is a gradient, but it has very different design properties at different orders of magnitude.
  10. To artificially add luck to anything, create a situation where everyone has the same starting prior (e.g. a dice roll, a coin flip, atmospheric pressure).
  11. For fun (not for profit), to make yourself more “lucky”, put yourself in situations where unsuspected high-utility outcomes may occur, without much probability of low-utility ones. This might explain why people who are more curious find more pleasant surprises.

Pure Land in Bullet Hell

Category: Art, Games · Sep 3rd, 2015


A collection of mini-essays in homage to Touhou scorerunners. Generalizable to shmups at large, and to a degree to some other media. For the uninitiated, a taste of scorerunning can be found here.

I. Man vs. self

Shmups embody the essence of man versus self. In few other genres are one-coin clears so emphasized, and mistakes so obvious to the player. The score mechanic, a decoration in most genres, thrives in shmups. The score communicates to you, in no uncertain terms, the worth of your play.

Other games are like miniature gardens; shmups are like miniature mountains. Their towering difficulty challenges all who pass. Most don’t heed it and continue on their way. Some do. They stop in their tracks, behold the peak of the mountain, dream up a journey to the top, and dedicate a part of their life to it.

In 2008 Kuro, a score-runner in Perfect Cherry Blossom, posted a screenshot showing over 2500 hours of gameplay spread over more than 36000 sessions.

II. What do you want?

I had once considered scorerunning in Touhou. Not far into the attempt, I realized that in order to do what I truly wanted to do, I could not afford to seek high scores. It was the first time I realized that I can’t want, and get, every lofty goal in life.

In April 2010, AM, a renowned virtuoso in the Touhou scorerunning scene, quit scorerunning to reclaim other aspects of his life.

III. Ability and wisdom

The impressive players of a game are not always wise. In 2011, the Chinese Touhou community outed score-runner MSH when it became apparent that he had cheated to obtain world records. The outing was a public one, as MSH had cultivated many followers at the time.

The truth is, MSH is a better player than you and I. He has the talent. But talent does not correlate with conduct.

Talent, creativity, persistence, perfectionism. Which ones make you nice, and which ones don’t? In the end, their score reveals little.

IV. Ruthlessness

My old replays have a certain ruthlessness to them.

When I had a goal in mind, I would ignore other conventions of “good plays”, instead abusing safespots, neglecting score, and otherwise bombing enemy patterns on sight to reduce the difficulty of the run.

Legendary player GIL was known to exaggerate his handicap for already-difficult challenges in order to produce awe-inspiring replays. In some ways I was the opposite of that.

I noticed that recently I’ve been playing shmups less ruthlessly. But I actually liked my old style.

V. Wholesome

Great score improvements in Touhou are measured in billions. For the first years of a game, the high scores tend to barely pass certain thresholds: 42 billion, 52 billion, 21 billion, 10.1 billion.

This is because the pioneers have no one else to look up to, so they ask: is 30 billion achievable? What about 50 billion? They plot out a path and inevitably achieve their first milestones. New challengers seek to beat the best score, devising their own improvements on top of those milestones. They don’t ask “is 50 billion achievable?” but “how do I beat the current world record?” Thus, world records tend to grow more slowly after passing certain milestones.

Eventually, when the state of the art is near perfection, someone comes along and asks “what’s the theoretical maximum?” This was the case when coa reached a score of 1.00002 billion in Mountain of Faith extra mode, 0.1% away from the theoretical maximum of 1.0001 billion.

VI. Creativity

A good replay is largely about execution, but sometimes creativity trumps. In this way, scorerunning is a bit like an art form.

In the demo days of Subterranean Animism, a player named UnKnown started submitting replays with innovative strategies, consequently besting the then-highest scores by huge margins. But he did this to his own replays too, submitting new replays under aliases like “U.N.None”, “#unexist”, and ” ” (blank) that introduced innovations on top of his own.

From one of his aliases, “tongrentang”, we can trace him to the Chinese scorerunning community. Perhaps some top Chinese players know his/her identity. But the aliases seem to suggest that in the end, who it is doesn’t matter.

VII. Finding pure land

To score-run in Touhou is to stand between life and death. Most Touhou games have a “graze” mechanic, which rewards players for hovering near bullets (so-called graze zone). Most high-scores have tens of thousands of grazes, meaning tens of thousands of intentional almost-deaths.

The graze zone is a harsh and hostile habitat. Imperfections must be purged. The replays discreetly showcase the scorerunner’s ability to survive under hostility.

But the players thrive and rejoice under such hostilities. They choose it over the leisure that most media afford, and devote thousands of hours to exploring this world and reaching for perfection. It’s a strangely haunting vision of what the world can be – a place of discipline, self-improvement, tackling limits, striving for the seemingly impossible – a pure land.

VIII. Monuments never-ending

The hundreds of replays on the Touhou high-score board and thousands made in an attempt to reach it are made permanent by the advent of the digital age. Even as a mild shmup player, I can trace the thoughts of the players as they weave through the bullets and improvise under unexpected conditions. These players are no doubt fully concentrated, and the replays capture their thoughts across brief moments in time.

But to earn that privilege to be seen, studied, acknowledged, and admired, you must surpass all others before you and ascend to the top. Even then, there is no absolute security. No perfect replay has ever been produced; no one has ever done everything that can be done. As long as Touhou is played, the monuments of these scoreruns will inevitably be replaced by the scoreruns of the next generation of challengers.

In October 2010, Jack attained 1.002 billion in Mountain of Faith extra, breaking coa’s theoretical limit of 1.001 billion. In the comments section, he conjectures that 1.003 billion is theoretically possible.

Why is Writing for Games so Difficult?

Category: Design, Games, Writing · Aug 5th, 2015


I’ve been watching walkthroughs of The Last Express. Funny that a concept I thought was innovative a year ago had in fact been done close to two decades ago.

As a game designer, I often find myself awkwardly maneuvering game elements in order to fit the story. Why is writing for games so hard? I think the problem is interactivity.

Consider the traditional written or oral tale. In the story, not every minute gets assigned equal weight. Seconds, days, or years that are not essential get skimmed over. This is the convention, without which both writing and reading would be unbearable. Movies embrace this convention through cuts.

In order to fit a good story to a game, we naturally want to extend the above analogy to games, but interactivity works against this in three important ways:

1. Major gameplay sections usually do not support the “cut to the interesting part” structure.

2. Conventionally, gameplay sections must be engaging, which does not necessarily correlate with interest points within the story.

3. Gameplay affords the opportunity to deviate from a single storyline, often in so many ways that covering all scenarios is prohibitive in terms of cost.

In any given imaginary timeline, in order to seamlessly integrate story and gameplay, we must slice the timeline in a way that satisfies both the conventions for good story “cuts” and the conventions for good gameplay “cuts”. #2 and #3 are both difficult to account for while maintaining a good tempo for the story, and #1 seems outright contradictory to the convention of a good story “cut”, unless the story or the game mechanics are somewhat special. Writing and gameplay are not a true dichotomy, but it’s clear that extremely specific requirements must be met to optimize both simultaneously.

If the player plays as a character in the game, then the “seamless integration” must also carefully avoid ludonarrative dissonance, a complex topic in its own right.

I think designers who write pre-existing stories into their games are in a world of hurt, yet most games today still ship with a fixed storyline. I think this is because, for a long time, a fixed storyline was the only way of storytelling we knew of.

So what are some potential solutions? The Last Express incorporates a highly interesting setting to sustain engagement throughout a stable timeline, while utilizing vast amounts of writing and technology to work around #3. Forgoing elaborate stories altogether may be a good choice (consider Threes and Angry Birds). Simple but well-crafted stories may be effective (Shadow of the Colossus) too. A sense of determinism can be built into the game to combat #3, which may also suitably evoke a sense of the tragic.

I think that increasingly, building the narrative out of player experiences rather than in-game story has become a popular choice. Games like Pokemon, League of Legends, and Journey have minimal built-in stories and rely on gameplay scenarios unique to each playthrough to generate evocative stories. One can think of this as analogous to “emergent gameplay”, except instead we are designing for potent story scenarios – “emergent storytelling”.

Quick Summer 2015 Game Jam Postmortem

Category: Coding, Games, Reflections · Jul 24th, 2015


Though UA GameDev Club’s summer jam was only 30 hours, I came out of the experience full of new insights.

What worked well:

  • The brainstorming process – I tried some free association at the very beginning and was organically led to an idea that was unique, technologically feasible, and full of potential for quick polish. It was the most painless brainstorming session I’ve ever had, and I suspect two factors were behind it: working solo, and considering the most difficult constraints first. Regarding the latter, I think recognizing the uniquely abundant affordances of using real portraits early on was a big win.
  • Solutions bred from time constraints – the huge time constraint prompted me to make decisions that ironically benefited the game itself. The game was originally intended to contain endless levels, with each level a new room with new portraits. The lack of time to make animations and adjust difficulty forced me to constrain the entire game to one room with fading portraits. Narrative-wise it seems to make less sense, but near the end of the project I knew instinctively that this approach would be better. I learnt that the logic of the game world is in fact as malleable as its graphics and sounds.
  • New tech requirements made the jam a learning experience – over the past two GJs I have been gradually inching out of the familiar territory of 2D games into 3D, but each time I made sure I bit off only as much as I could chew.
    The feeling of learning a lot of new skills AND finishing a nicely-designed concept is one of the most positive out there.
  • Game immersion – perhaps by luck, I stumbled upon an idea that affords a lot of immersion. I think making memory and panic integral parts of the gameplay somehow erased the boundary between the player and the avatar. The identity of the avatar as a powerless human character also helped cover up the fourth wall.
  • Built-in breaks – the knife-removal part of the game took longer than it should have to implement, but I am happy with the break in tension and realism that it affords in the end. I was originally worried that the player would take too long a break or find the task boring, but I think to a certain extent the human psyche is built to continue the game at the optimal level of interest/arousal. Put simply – players make subtle attempts to stay in the flow state.

What went wrong:

  • Not really a team effort – the game was originally going to be the effort of a team of two. However, I think I was initially a bit intimidating in my ambition to create a really polished game and in my obsession over details that later seemed relatively minor, especially since my potential teammate was much less experienced in game jams. The teammate did not return the second day, despite promising twice that he would contact me.
    I think during the jam I was torn between knowing that I would likely construct a higher-quality game on my own and the sense that I should take on less-experienced developers to learn social skills and team management. I initially wanted to work solo but wavered during the team-forming phase. In later jams, I should commit to either the solo or the team option right away and stick with it till the end.
  • I didn’t compose the music – I think it’s a better idea not to even consider composing for a GJ entry.
  • Didn’t finish the introduction and the ending – thinking back to the GJ, I don’t actually recognize any point when I could have done more to mitigate this problem without hurting the game in some other way.
  • Players didn’t have much motivation to stay – from the intro, players know that there will be four levels and that the levels will be generally the same. For this reason, several players decided that they didn’t need to see what’s beyond the first. I think the issue lies in the game progression rather than the introduction, though, and that I can probably add later hooks to keep the player engaged.

Other thoughts:

  • Negative thoughts during GJ – while I felt extremely good upon completion of the GJ, during the day of the competition my thoughts often lapsed into pessimism – “perhaps I should have picked a different idea after all”, “at this pace I will never finish”, “shouldn’t have slept for so long”.
    I began to realize that pessimism can occur in perhaps all but the most magnificent game projects, and that it’s a significant cost of working solo.
  • Don’t underestimate other players’ entries – being an extremely competitive person, I had thought that my game would likely be the best entry of the jam. But looking at the final entries, I don’t think that this is actually the case, and I was a bit ashamed for having put so much pride into the game.
    But then again, perhaps this is required to combat the pessimism outlined above.
    After all, Orson Scott Card once said something to the effect that “as you write, you should think of your writing as simultaneously the worst thing and the best thing ever written.”

In the next week or so, I will take a day to polish the game and add the features that didn’t make it on time. Stay tuned for the release!

Unity3D General Tricks and Gotchas

Category: Coding, Games · Jun 13th, 2015

This is a continuation of Unity 2D Tips and Tricks, some of which are adaptable to 3D as well.

Tricks:

1. Translate or Rotate relative to world space
Unity3D’s API doesn’t advertise the optional parameters of its Translate and Rotate functions as much as their importance merits. By default, translations and rotations are conducted in the local transform space. This is usually convenient, but can be annoying when parent/child relationships are used for their convenience in referencing each other. The optional relativeTo parameter of Translate and Rotate takes either Space.World or Space.Self, saving us from nasty workarounds.

2. Choosing random enums
A shortcut function to pick a random enum is not provided by Unity3D, but one (stolen from a Unity Answers thread) is immensely helpful for fighting technical debt while rapidly prototyping.
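The idea generalizes beyond Unity: pick a uniformly random member of an enum type. A quick Python sketch of the same helper (PowerUp is a placeholder enum for illustration, not anything from Unity):

```python
import random
from enum import Enum

class PowerUp(Enum):  # placeholder enum, purely for illustration
    SHIELD = 1
    SPEED = 2
    DOUBLE_SHOT = 3

def random_enum(enum_cls, rng=random):
    """Return a uniformly random member of an Enum class."""
    return rng.choice(list(enum_cls))

print(random_enum(PowerUp))
```

In C# the equivalent one-liner indexes randomly into the array returned by System.Enum.GetValues.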

3. Use a Tweening Engine
Tweening refers to exercising fine-tuned control over the transition of variables, often through existing mathematical easing functions. See an example here (automatic-audio warning).

In the context of a game, it is an invaluable time-saving tool for two reasons. Firstly, tweening adds a great amount of polish to a game in very little time; secondly, tweening engines usually come with timeline functionality that dramatically simplifies potentially annoying tasks.

For example, consider simulating a ball that bounces once, then stops, fades, and gets destroyed one second after fading. Using Update, we must hand-write a sine function to approximate the trajectory each frame, keeping track of its amplitude and phase. We must keep variables that track the time since bouncing started, whether the ball has finished bouncing, the alpha value of the ball, and the time elapsed since fading finished. The result is ~20 lines of ugly code that is inflexible to change.

Using something like DOTween, we can simply do:

Sequence mySequence = DOTween.Sequence();
mySequence.Append(transform.DOMoveY(100, timeToMove / 2f).SetEase(Ease.InSine));
mySequence.Append(transform.DOMoveY(0, timeToMove / 2f).SetEase(Ease.OutSine));
mySequence.AppendInterval(1);
mySequence.Append(DOTween.ToAlpha(() => renderer.color, x => renderer.color = x, 0, 1));
mySequence.AppendCallback(() => Destroy(gameObject));

 

Note that using a tweening engine, the code is much shorter and more readable, and it’s easier to tweak for different types of effects. The alpha-tweening line has some fancy syntax for passing getters and setters; it is well worth your time to learn this notation, if only for the extra power it gives you with the tweening engine.

Gotchas:
1. Rounding
Unity’s Mathf.Round function has odd behavior. From the API:

If the number ends in .5 so it is halfway between two integers, one of which is even and the other odd, the even number is returned.

Here is a more minimalistic round function that behaves according to expectation.

public static int Round(float value){
	return Mathf.FloorToInt(value + .5f);
}
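This round-half-to-even (“banker’s rounding”) behavior is not unique to Unity – Python’s built-in round does the same, which makes for a quick demonstration of how it differs from the floor-based helper:

```python
import math

def round_half_up(value):
    # Same idea as the Mathf-based helper: always round .5 upward.
    return math.floor(value + 0.5)

# Banker's rounding: halves go to the nearest even integer.
print(round(0.5), round(1.5), round(2.5))                          # 0 2 2
print(round_half_up(0.5), round_half_up(1.5), round_half_up(2.5))  # 1 2 3
```

The two schemes agree everywhere except exact halves, which is precisely where the “odd behavior” shows up.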

 

2. Photoshop Color Management
Unity’s texture import feature does not account for color profile settings in Photoshop, as seen here.

The difference looks something like this:
[screenshot: the same texture imported with and without Photoshop color management]

This is a small detail that could cause massive headaches in reworking textures if not properly dealt with.
To prevent it, select “Don’t color manage this file” on all graphic assets intended for import through Unity.

3. Corrupt Prefabs via Git
As late as Unity 4.6, saving a prefab without saving the scene the prefab is in can push a corrupt version of the prefab to the remote git server. This is one more reason to follow the good practice of always saving the scene before a git push.

4. Viewport Space, Screen Space, and World Space
Unity’s Camera class provides a wide variety of useful functions to convert between dimension measurements on the screen, in the view port, and in the game’s world. Distinguish between these carefully:

The Screen space gives the dimensions of the game in pixels. The bottom left of the camera is (0, 0); the top right is (Camera.pixelWidth, Camera.pixelHeight), respectively the width and height of the camera in pixels. The Z value is the distance of a point in the world from the camera.

The Viewport space is normalized. The bottom left of the camera is (0, 0), and the top right is (1, 1). The Z value is the distance of a point in the world from the camera. This space is useful for positioning GUI elements.

The World space is the global positioning space that Transform components inhabit.

Note that when converting between any two spaces, we are not confined to the screen or the viewport. It’s perfectly valid to ask the Camera to convert (2, 2, 0) from viewport space to world space; in this case, we obtain a point in the world about one camera-width outside the current view.

When working in 2D, it’s tempting to use the dimensions of the camera to determine the positioning of GUI elements. I find viewport space more reliable, since it is invariant to both camera size changes and game screen resizing.

5. OnDestroy in Child Transform
OnDestroy is a convenient callback, but beware that when a child is destroyed due to its parent being destroyed, the OnDestroy function on the child is not called.

6. Proper way of checking whether an object has been destroyed
Due to Unity’s gameObject implementation, checking whether an object has been destroyed is not very obvious. The correct way to do so is:

if(target == null || target.Equals(null)){
	//Assume that target has been destroyed.
}

This is a handful of code to type for each destruction check, so consider writing a static function under some utility class to handle the check instead.

7. Unity Canvas Crossfade
The new canvas GUI system in Unity seems very powerful, but UI.Graphic.CrossFadeColor and UI.Graphic.CrossFadeAlpha are broken in the sense that neither will work if the color of the Graphic element was changed right before the call.
The solution turned out to again be tweens. Example code that I used:

menu.color = new Color(1f, 1f, 1f, 0f);
Color c = new Color(1f, 1f, 1f, 0.7f);
menu.DOColor(c, 3f);

This code shifts the menu’s color from completely transparent to a transparent white in 3 seconds.

Unity 2D Tips and Tricks

Category: Coding, Design, Games, Productivity · May 28th, 2015

While the 2D features in Unity3D get a lot less love than the rest of the engine, Unity3D is still a very powerful tool for rapid prototyping in 2D, given some customization. Since the 2D features leverage design decisions catered toward 3D, there are occasional gotchas in 2D to be aware of as well. Here I detail some of the tricks and gotchas I noticed while working with 2D in Unity.

In general, it is a VERY GOOD IDEA to extend the given MonoBehaviour class to create shortcuts for verbose but necessary instructions. The shortcuts can be implemented by way of C# properties. This reduces clutter in code and leads to fewer bugs. In fact, all but one of the “tricks” below use this approach to simplify 2D programming.

Tricks:
1. Barring very special cases, most of your objects are going to have the SpriteRenderer component. When extending MonoBehaviour, you can implement a shortcut to access the spriteRenderer:

private SpriteRenderer _spriteRenderer;
public SpriteRenderer spriteRenderer{
	get{
		if(_spriteRenderer == null){
			_spriteRenderer = GetComponent<SpriteRenderer>();
		}
		return _spriteRenderer;
	}
}

The _spriteRenderer field in this code caches the SpriteRenderer to skip extraneous, costly GetComponent calls.
Now, monoExtendedObj.GetComponent<SpriteRenderer>() can essentially be replaced with monoExtendedObj.spriteRenderer.

2. Angle manipulation in 2D is usually limited to rotation on the Z axis. However, to specify Z rotation you must go through Quaternions, which are necessary for 3D rotations. We can shove the messy quaternion operations under the rug by defining a custom “angle” property:

public float angle {
	set {
		Quaternion rotation = Quaternion.identity;
		rotation.eulerAngles = new Vector3(0, 0, value);
		transform.rotation = rotation;
	}
	get {
		return transform.rotation.eulerAngles.z;
	}
}

Note that when getting a value from this property, you will always obtain a value between 0f and 360f. If this is undesirable, consider keeping a private variable that stores the angle value before the modulo.

3. Alpha (opacity) values require dealing with the color property of the spriteRenderer – an extremely verbose operation even with the existing spriteRenderer shortcut. This property lets you set the transparency of the spriteRenderer easily:

public float alpha {
	set {
		if(spriteRenderer != null){
			Color _color = spriteRenderer.color;
			spriteRenderer.color = new Color(_color.r, _color.g, _color.b, value);
		}
	}
	get {
		if(spriteRenderer != null){
			return spriteRenderer.color.a;
		}
		else return 0;
	}
}

An alpha value of 1 or more is fully opaque; 0 or less is fully transparent. The values are not clamped on get or set, so the alpha may double as a transparency counter/timer. However, you might want to use a tweening library to achieve timing effects more gracefully.
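Here is a sketch of the counter/timer idea in action: because the stored value is unclamped, the alpha itself can track how much fade time remains. As before, MonoBehaviourExtended is a hypothetical name for the extended base class that defines the alpha property:

```csharp
// Hypothetical usage: fade this sprite out over one second, then disable it.
public class FadeOut : MonoBehaviourExtended {
	void Update() {
		alpha -= Time.deltaTime; // unclamped, so alpha doubles as a timer
		if (alpha <= 0f) {
			gameObject.SetActive(false);
		}
	}
}
```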

Tips:
1. Most graphics bugs in your 2D game will come from drawing order or faulty Z values. Unity3D’s ordering scheme for 2D sprites is versatile but also confusing. The determining factors for sprite ordering are (from highest to lowest priority):
-Sorting Layers (Bottom ones in the Unity UI are drawn on top)
-Order in Layer (Higher orders are drawn on top, negative values are possible)
-Z value (Closer in camera line-of-sight (LoS) drawn on top)

By default, the camera sits at a height of 10 with its LoS pointing downward. This implies that, without changing the camera, higher Z values are drawn on top. HOWEVER, the camera also has a near clipping plane of 0.3, meaning it will not render anything closer than 0.3 units to the camera along its LoS (with default values, anything with a Z value > 9.7).

Finally, note that Order in Layer is a signed short ranging from -32,768 to 32,767. Attempts to utilize numbers outside of this range will likely mess up your drawing order.

There are several takeaways from this. The first is that you should be careful with modifying default camera values unless you are certain how the camera works in 2D. The next is that while dynamic Order in Layer and/or Z values are convenient, each comes with its own constraints. In general, Z values are better when you need more ordering slots than the ~65,000 that Order in Layer provides (this does occasionally happen), but they can make sprites disappear past the clipping planes if you are not careful. Z is also a float, so keep rounding errors in mind.
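As a sketch of the dynamic Order in Layer approach, a common top-down trick is to order a moving sprite by its Y position, so that objects lower on screen draw on top of objects above them. This assumes a spriteRenderer shortcut like the one from Trick 1; the multiplier is illustrative:

```csharp
// Sketch: order a dynamic sprite by its Y position (top-down games).
void LateUpdate() {
	// Negate so lower Y (closer to the bottom of the screen) draws on top.
	// Scale by 100 but keep results well inside the signed-short range
	// (-32,768 to 32,767) of Order in Layer.
	spriteRenderer.sortingOrder = -(int)(transform.position.y * 100f);
}
```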

2. Unity3D’s particle and trail renderers do not expose their “sorting layer” field in the Inspector, and that’s terrible because suddenly you cannot control the drawing order of your particles and trails.
To fix this, write a custom component with the code below and attach it to your particle or trail renderer:

void Start () {
        //Change "YOUR_LAYER_NAME" to the sorting layer you want the
        //particles or trail to display on, otherwise they may not show up.
        GetComponent<Renderer>().sortingLayerName = "YOUR_LAYER_NAME";
}

Note that this will set your object’s drawing layer to a static layer called “YOUR_LAYER_NAME”. If this is not what you want, or you prefer to select the drawing layer in the Unity3D UI, check here to obtain a list of drawing layers and work from there.

3. As these reddit users have found, Unity3D’s 2D collision detection system will make your life very tough if you do not use physics. This is problematic for a lot of simple 2D games where physics would either be overkill or make debugging difficult.

I have found that the 3D collision detection system has much less stringent requirements. You must still have at least one rigidbody per collision pair, but if you tightly control the Z values of your objects (usually by clamping them to 0), you can get around a lot of weird issues that arise in 2D collision detection without physics.

Note that doing both this and ordering with Z value can get hairy fast. Avoid one or the other.

4. Often it is useful to know the dimensions of a sprite, either at import time or after stretching. The variables below have very similar names, but only some will give you what you want:

SpriteRenderer.sprite.rect specifies the size of the sprite in pixels.
SpriteRenderer.sprite.bounds.size provides the bounding box of your sprite in game units, before any scaling. This is usually SpriteRenderer.sprite.rect divided by the “pixels per unit” value in the sprite import settings.
SpriteRenderer.bounds.size provides the bounding box of your (sprite) renderer as it appears in the game, i.e. SpriteRenderer.sprite.bounds.size after the transform’s scaling.

Any .size can be replaced by .extents to obtain a box with each dimension halved; this is a useful shortcut when only half the bounding box is required.
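A quick way to see the difference between the three is to log them side by side on an object that has a SpriteRenderer (a throwaway debugging component, nothing more):

```csharp
// Sketch: compare the three size-related values for one sprite.
void Start() {
	SpriteRenderer sr = GetComponent<SpriteRenderer>();
	Debug.Log(sr.sprite.rect.size);   // size in pixels
	Debug.Log(sr.sprite.bounds.size); // game units, before transform scaling
	Debug.Log(sr.bounds.size);        // game units, after transform scaling
}
```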

These are all I have regarding 2D, for now! I’m planning on starting a general Unity3D tips and tricks post soon, and most of that post will apply to 2D as well, so stay tuned!