Pantarheon

Where everything flows. In 3D.

The Seven Masters

Adam Stanislav

19 June 2010

Let’s talk about 3D in photography, film and video. Everyone seems to know what 3D is, and everyone seems to have an opinion about it. Yet, in reality, most people use the term “3D” quite casually and often incorrectly. So, perhaps not everyone really knows what they are talking about when it comes to 3D.

A Mule Is Not a Horse

A mule is not a horse, and perspective is not 3D. The mule is a very useful animal, of course. And the discovery of perspective has revolutionized art forever. But calling things what they are not serves only the marketing departments that lie to sell their products.

If editing images using perspective projection is “3D” as Sony would like us to believe with their deliberate mislabeling of what their Sony Vegas software does, then Michelangelo’s art on the ceiling of the Sistine Chapel is not a painting but a sculpture. Not that Sony is the only company that twists the facts; it just so happens that I am very familiar with, and quite fond of, Sony Vegas, as I use it in my own work.

Unless a photo, a film, or a video presents a different image to each eye, it is not, and never will be, 3D. Nor is it 2½D, as some have suggested. It is perspective. And, of course, perspective is a great thing, so why lie about it in the first place?

The Dimensions

How many dimensions are there? Just to be clear, I am only talking about spatial dimensions, not counting time as the fourth dimension for the purposes of this discussion. That clarified, as far as we humans are concerned, there are three dimensions. This is based on two major factors: our perception and our apperception.

A theoretical physicist might offer a completely different explanation. But I am not a physicist, I am a psychologist. Plus, we are not discussing physics here, we are discussing photography, film, and video.

Our perception stems from the fact we have two eyes, with the pupils some 54–72 mm apart (mine are 69 mm apart if you’re curious), and from the fact that both of those eyes are positioned in the front of our face, in the same “plane” as it were, both looking in the same direction.

The first fact is easy to test: place an eye patch over one eye and walk around for a week to see the difference. The second fact is a bit harder to test, but think of the way a horse sees, with its eyes being on the two sides of its face, separated by its nose. Or perhaps you could just take two pictures with two cameras set back–to–back, so each would point in the opposite direction, then look at them using 3D technology. I have never tried it (the idea just came to me as I was typing this), but I suspect we would have a hard time combining the two images in our brains at first. We might get used to it after a week or so, and then we would need to re–adjust to our usual way of seeing things once we stopped the experiment.

Our apperception (the way our experience affects how we interpret what we perceive) probably stems from the shape of the human body, which offers us three main dimensions to consider.

For starters, unlike most other animals we are related to, we stand tall. That is, we stand straight on our “hind” legs. Because of that, we can easily identify the above and the below, or the up and down, the vertical dimension.

Secondly, we can spread our arms sideways. That makes it quite natural for us to think of things as being to the left and to the right, the horizontal dimension.

And since, as mentioned, our eyes point in the same direction, a direction perpendicular to both the vertical and the horizontal dimensions, it comes naturally for us to identify things as those in front of us (we see those) and those behind us (we don’t see those, which can be quite dangerous to us occasionally). Hence, the third dimension, the front and back dimension, or depth.

If our bodies were shaped very differently, our apperception would be quite different and we might think of a different number of dimensions than three. If, for example, we were shaped in the form of a sphere, we might think of π (pi, 3.14…) dimensions. Or perhaps we would think of everything as angles with infinite dimensions. I don’t know, I am not a sphere.

And if we had a different number of eyes, our perception would be different. Or if they were spaced differently (like in our horse example), our perception would be different. Or if the eyes themselves were shaped differently, like the segmented eyes of a fly, our perception would be yet different.

Thank goodness we are not a dodecahedron with twelve eyes! Just imagine the technological challenge of filming with 12 cameras and then letting each view show in a different eye. Our present–day challenges with 3D pale in comparison.

Of course, we are what we are. We perceive the way we perceive. And we apperceive as we apperceive. Any discussion of 3D for humans can safely stay within our natural bounds.

3D in Photography

We can now talk about 3D in photography. And by photography I also mean film and video. Mostly because I am too lazy to type three words where one word can cover all three. On the rare occasion where I talk about photography to the exclusion of motion pictures, I will say static photography.

Photography, as you know, is a Greek word which means writing (or drawing) with light. You have a dark chamber (camera obscura) which has a pinhole on one side and a photosensitive material, such as film or an array of electronic sensors, on the opposite side. The pinhole usually has a lens inserted to speed things up. The photosensitive material is planar. In other words, it is two–dimensional. It captures a 2D representation, with a generally perfect perspective, of the 3D reality in front of the camera (well, technically, just whatever is within its field of vision). In that it works very much like the human eye. One eye, that is.

Starting in the late 20th Century, advances in mathematics and in computers made it possible to manipulate the images digitally, or even create them from scratch without a camera (computer–generated imagery, or simply CGI). The mathematics behind it often uses 3D vectors, which is how the marketing types justify calling their products 3D. But the result is still a 2D image, which still makes those marketing types liars. That software produces perspective projection, which is 2D.

It did not take long after the invention of photography before many people attempted to overcome its 2D limitations. They used two cameras, spaced the same distance apart as average human pupils, typically 65 mm (a little more than some people’s, a little less than others’, but a good compromise). And they came up with numerous, often quite sophisticated, techniques of showing the left view to only the left eye and the right view to only the right eye. 3D photography was born!

Now, some of you (especially those marketers I have been criticizing) might argue that even showing two separate images (or series of images in film and video) to each eye is not really 3D because, after all, you cannot walk around the scene the way you could walk around a statue or any other actual 3D object. I would like to remind those people that a live theatre performance is three–dimensional. Yet in classical theatre the stage is inside a box with a large portion of one of its walls removed (theatre experts call it “the fourth wall”). You cannot walk around the stage, nor are you allowed to step onto the stage and walk around the actors. In modern theatre the box is often non–existent, but you are still not allowed to walk around during the performance. You sit passively in a comfy chair and watch the entire performance from that chair. Yet, no one in his right mind would argue that the stage and the actors on it are two–dimensional.

So, yes, the photography that manages to show a left view to the left eye and a right view to the right eye truly is 3D.

The Mixed Message

The producers of 3D movies often send out a mixed message. They insist they shot the movie specifically for 3D using its threediness as an essential part of the movie, so the movie would not work the same in 2D. Then they release it in 3D in those theaters that can project 3D and in 2D in the rest of the theaters. (Note: If you think I am confused for talking about theatres one moment and theaters another, I am using the American spelling convention in which a playhouse with live actors is a theatre and a movie house is a theater.) Then they release it on DVD and BD in 2D first and in 3D some time later.

Presumably they do it to milk the cash cow: more people will see it in theaters and then buy the DVD or BD twice, first in 2D, later in 3D. This is absurd. Either the 3D is essential to the movie or it is not. No wonder many people think of 3D as a gimmick and reject the argument that sound was also viewed as a gimmick when it first arrived, as was color, not to mention panning, tilting, closeups, and all the other ways of showing something other than just turning on the camera and leaving it stationary.

It is hard to argue with them because we do not see the studios releasing their movies on DVD/BD first as silent movies in black & white, then as black & white with monophonic sound, later in color with monophonic sound, then in color with stereophonic sound, and finally in color with 5.1 sound. No, they release them straight in color with 5.1 sound, often with an extra 2–channel stereophonic sound thrown in for good measure.

So, why can’t they release 3D as 3D exclusively?

The Hybrid Problem

The mixed message mentioned above is often accompanied by what I call the hybrid problem. This problem happens mostly in film and video, though static photography is not always immune to it. Specifically, the problem appears when you shoot and edit strictly for 3D and then release the left view alone or the right view alone as if it were 2D. In most cases it is not 2D and should not be treated as such.

The problem is caused by adding 3D text or 3D transitions and not creating a separate middle view in which all text and transitions are in 2D. Since there are no transitions in static photography, the hybrid problem sneaks into static photography only when 3D text is added to the picture. Otherwise, each view may generally also be seen as a standard 2D photograph. Film and video, however, almost always contain some text, and at least some of it tends to be 3D if the film or video itself is 3D.

Let me explain. Please take a look at this picture (you will need red/cyan anaglyph glasses, something anyone interested in 3D should have sitting by his computer at all times).

When viewed with proper anaglyph glasses, this picture shows the word “HELLO” three times. The three words are perfectly aligned above each other. The top word floats in front of the screen, the center word is exactly at the screen level, and the bottom word is behind the screen. It is a small picture, so if you cannot quite see where the words are relative to the screen, just position your mouse cursor between the two vertical legs of the letter H in each word. That will make their positioning quite clear, as the mouse is always on the screen itself.

Note that this is not really a good 3D image. The background is at the screen level, yet the bottom word, though behind the screen, still appears in front of the background. And all three words are of the exact same size, while the top word, being in front, should be slightly larger than the center word, and the bottom word, being behind the screen, should be slightly smaller. After all, the farther away an object is, the smaller it appears.

I did that on purpose, however, as it makes it easier to see how the three words are perfectly aligned above each other.

If this picture were a frame from a 3D movie and we wanted to release it in 2D the customary way, by extracting either the left view or the right view and calling it 2D, which view do you suppose we should choose?

Before you answer that question, take another look at the picture, but this time do not wear the anaglyph glasses. And remember that, as far as the letters of the three words are concerned, the left view consists of the red and white portion of the image, and the right view of the cyan and white portion (white is where they overlap). Go ahead, examine it now.

As you can see, whether you take the left view or the right view, the three words are no longer perfectly aligned above each other. To make that even clearer, I will now show you the left and the right views next to each other:

The left picture contains the left view, while the right picture shows the right view. Neither is suitable for a 2D release of the movie.

This may not be a problem with the general portion of the movie which only shows what the two cameras see. After all, real life objects are usually not perfectly aligned above each other. In many cases this may not be a problem with 3D transitions either, as transitions tend to move fast, each frame different from the one before and the one after.

But it is always a problem with text. The text has to stay on the screen long enough for us to read. And even if most viewers are too busy reading the text and watching the movie, completely caught up in the story (or so we hope ☺) to analyze it and see what is wrong, most will feel something is wrong. And we usually don’t want that (except perhaps in a parody).

What we actually want to see in 2D is this:

How do we get it?

The Genetic Solution

A fairly simple possibility is something I like to call the “genetic solution.” Not because it is hard–coded in our DNA, but because I have borrowed the idea from the two main types of genes: Dominant and recessive.

Our genes (and those of other organisms) consist of pairs of alleles. In many cases, each allele within the pair can have one of two states, on and off. If both alleles are on, we inherit whatever trait that pair controls. If both are off, we do not. But what if one is on and the other is off? Then one of them is dominant, the other recessive. And the dominant one decides whether we inherit the trait.

In the genetic solution, we make one of the views dominant, the other recessive. When watching the movie in 3D, we are not aware of which view is dominant and which recessive, they are just the usual left and right views. But when we generate a 2D version of the 3D movie, we simply copy the dominant view into it.

The point of the genetic solution is that we do not just randomly show one view to the 2D audience; we consciously and deliberately decide which view we are going to use for 2D and then design our text (and other effects) in such a way that the dominant view looks good in 2D. In our case it means we make the words perfectly aligned above each other in the dominant view and add the threediness to the recessive view.
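To put the genetic solution in programming terms, the 2D downconversion is nothing more than a copy of the dominant view, frame after frame. Here is a minimal sketch in C, assuming packed 8–bit RGB pixels and a side–by–side left/right frame layout; all the names are hypothetical, not taken from any existing library:

    #include <stdint.h>
    #include <string.h>

    /* Copy the dominant view of a side-by-side 3D frame into a 2D frame.
     * The 3D frame holds the left view in its left half and the right view
     * in its right half; each view is `width` pixels wide, `height` pixels
     * tall, 3 bytes (packed RGB) per pixel. */
    static void extract_dominant_view(const uint8_t *frame3d, uint8_t *frame2d,
                                      int width, int height, int left_dominant)
    {
        const int bytes_per_pixel = 3;            /* packed RGB             */
        const int view_stride  = width * bytes_per_pixel;
        const int frame_stride = 2 * view_stride; /* two views side by side */
        const int offset = left_dominant ? 0 : view_stride;

        for (int y = 0; y < height; ++y)
            memcpy(frame2d + y * view_stride,
                   frame3d + y * frame_stride + offset,
                   (size_t)view_stride);
    }

Run over every frame of the movie, a loop like this produces the 2D release; the real creative work is making sure the dominant view was designed to look good in 2D in the first place.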

So, if we choose the left dominant 3D approach, this is how our left and right views look:

And if we choose the right dominant 3D approach, we design these two views:

In either case above, the top image shows the left view, the bottom image the right view.

Now that we know that our 2D text will look good, the big question is what the genetic solution does to our 3D text. Below you can see both solutions, the left dominant in the left picture, the right dominant in the right picture. You need your red/cyan anaglyph glasses to see them in 3D:

You can easily verify the dominance of either picture (and of any 3D picture) by looking at it with just the one eye representing the dominant view. In other words, close the recessive eye. So, if you close your right eye while looking at the left picture above through red/cyan anaglyph glasses (in other words, you will be looking at it with your left eye), you will see only the left view. You will also see that view has the words perfectly aligned above each other. You will get the same result if you look at the right picture with your left eye closed (so you are looking with your right eye).

But if you look at our original 3D image, which did not use the genetic solution, it will not be aligned regardless of which eye you close, it will only be aligned when viewed in 3D with both eyes fully open.

So now, by creating a dominant view, we have solved the problem of seeing the words properly aligned when viewed in 2D. This solution came at a price: Now the words are not perfectly aligned when viewed in 3D. We have traded one inconvenience for another. Or have we?

As the three words appear on three different planes of view when seen in 3D, chances are we will not notice the misalignment unless we are deliberately looking for it. Sure, some nerd will always look for any “imperfections” in your movie and triumphantly list your perceived misalignment in the “errors” section of the listing of your movie on imdb.com, but no matter what you do, the keen eye of someone with no life will always find errors in your work. Even if you are a genius filmmaker. So, don’t worry about nerds.

Now we need to decide which of the two dominant versions looks better. I opine that, at least in our case, the left dominant 3D version not only looks better than the right dominant 3D version, it actually looks better than the original. While in 2D the perfect vertical alignment clearly looks better than either the left or the right view of the original, in 3D the perfect alignment looks somewhat sterile and artificial, while the slight misalignment emphasizes the threediness of the image, feels more real, and makes the words easier to read.

This is because in English—as in most languages of the world—we write from left to right and, more importantly, read from left to right. The left dominant 3D image presents the words in their “natural” order in which we are supposed to read them. I suspect that in those languages that read and write from right to left the right dominant 3D image would look better.

All in all, picking an appropriate dominant view before we even start shooting (and certainly before we start editing) and keeping that dominance in mind throughout the entire creative process makes our movies better prepared for a 2D downconversion and may even help us make better 3D movies (and in some cases better 3D static photographs).

Best of all, we can do it right now as it requires no changes to the software we use for preparing 3D pictures.

The Pirho Format

Let me propose yet another solution for the problems stemming from making both 2D and 3D versions of the same film and video (it would, however, be an overkill for static photographs). Since I am presenting it here on Pantarheon, I’ll call it the “Πρ format,” which of course is pronounced as the “Pirho format.” It gives us complete control over the appearance of the 2D and 3D versions of a movie, independent of each other, way beyond what we have discussed so far. As such, it even allows us to produce films and videos in which the 2D version shows something entirely different from what the 3D version does.

Why would we want to do that?

Because sometimes a 3D movie displays directives pertaining strictly to its threediness. For example, such a movie may display a text or a symbol directing us to put on the 3D glasses when it is switching from a 2D section of the show to its 3D section, and later another text or symbol directing us to take the glasses off when it has switched to a 2D section.

By the way, I totally hate it when movies do that. Why? Because I wear prescription glasses. And I wear headphones when watching movies at home. Whenever I am told to put the 3D glasses on, I have to pause the playback, take off my headphones, take off my prescription glasses, put on the 3D glasses, put my prescription glasses back on over the 3D glasses, put my headphones back on, and start the playback again. I have to go through a similar ritual when I am directed to take my 3D glasses off. A pain in the neck it is! Think about that, Robert Rodriguez, before you make your next 3D movie. You’re too good a director to make the lives of your glasses–wearing fans miserable!

Nevertheless, such movies do exist. And in those movies it makes no sense to display any of those directives in the 2D version of the movie. With Pirho you can show the directions to the 3D viewers and not show them to the 2D viewers, or even show them something completely different, while keeping it all in the same video on the same disc or in the same file you let people download from the Internet.

To produce Pirho, we start by creating three views instead of two, left, middle, and right, where middle represents the 2D view. That does not mean you need to shoot your entire movie with three cameras (though it would not hurt ☺). It only means you need the three views for those few sections that require something different for 2D than for 3D.

Like this:

Let me show it to you again, this time with no spaces between the pictures (assuming your browser supports CSS):

Better yet, let us look at something with the 2D different from the 3D, again left, middle, right from top to bottom:

Yes, it’s ugly. I hope you’ll never use this particular design in a real movie! Anyway, here is the 3D anaglyph, which we may show to someone watching it in 3D:

And here is what we would show to someone watching it in 2D:

The Pirho Format Specification

There are too many formats for 3D photography. Some store each view in a separate stream. Some store them in a side–by–side format with the left view in the left half of the video frame and the right view in the right half, doubling the width of the frame. Others store them in a cross–eyed view with the left view in the right half of the frame and the right view in the left half, again doubling the width.

Yet others store the two views in an above/below format with the left view in the top half of the frame and the right view in the bottom half, doubling the height of the frame. Or perhaps in the below/above format, which swaps the positions of the views.

Then there are those that use the side–by–side or cross–eyed formats but reduce the width of each view by half, preserving the width of the frame by sacrificing some of the detail. The yt3d format of YouTube is an example of that. And some formats do the same with the above/below and below/above formats, shrinking the height of each view to preserve the height of the frame.

Some even store the mirror image of one of the views.

Most of them (well, all but the separate streams) are useful display formats, that is, ways of showing the images to the viewer or the audience. But having them all as file formats adds to the confusion that 3D photography has been riddled with ever since the dawn of the computer age.

I don’t want that to happen to Pirho. I repeat, I don’t want this to happen to Pirho!

For that reason I am defining one and only one Pirho format. Any software that wants to be Pirho compatible, be it a photo or video editor, a codec, a player, or anything else, must understand and support this format. If it does not, it must not claim Pirho compatibility.

Compatible photo and video editors must be able to create images/frames in this format and pass them onto compatible codecs for compression before storing them in a file. They also must be able to accept images/frames in this format from compatible codecs and interpret them properly.

Codecs, to be compatible, must be able to accept images/frames in this format from photo/video editors. They can then compress them in any way they want. They also must be able to later decompress whatever they had compressed back into this format and pass the result on to compatible editors and players.

Players must be able to accept this format from codecs, extract the 2D and the 3D portions and display them to the user either as 2D or 3D images or videos, usually based on user preference.

Obviously, no editor, codec, or player can be forced to use this format. But if it wants to support Pirho, then it must do as listed in this format specification. And if it does not, it must not claim Pirho compatibility.

This specification is entirely mine. I hold no patents to it, nor do I have any intention of patenting it. Any software or hardware developer or company, whether commercial or non–commercial, whether closed–source or open–source, is granted permission to support the Pirho format, as long as he/she/it supports it in its entirety exactly as specified herein.

Here then is the Pirho format. Each image or video frame consists of three views, left, middle (2D), and right. All three views must use the same pixel type (such as RGB, RGBA, YUV, etc). All three views must have the same dimensions. That is, all three must be of the same height and all three must be of the same width.

The three views are combined, side by side, into an image or frame whose height is the same as the height of the three views and whose width is the combined width of the three views (in other words, its width is exactly three times the width of any individual view). They are combined in this order:

  1. The middle view, which represents the 2D portion of the image or the frame, is stored in the left third of the combined image or frame.
  2. The left view, which represents that part of the 3D portion that is to be seen by the left eye only, is stored in the middle third of the combined image or frame.
  3. The right view, which represents that part of the 3D portion that is to be seen by the right eye only, is stored in the right third of the combined image or frame.
  4. In addition, any Pirho compatible software should understand and support the Seven Master formats, as described below. And any Pirho compatible software for Windows should support all the functionality described in the Seven Masters API for Windows. By “should” I mean that while it is not an absolute requirement, software is strongly encouraged to support it.

In plain English, the views are stored, left to right, 2D view first, immediately followed by the left view, immediately followed by the right view. The following images illustrate it. These images have been shrunk to just one fourth of their original size. This is to, hopefully, prevent your browser from fitting this text to a window that is wider than your screen, which might force you to scroll the window to the right and back to the left just to be able to read this page. You can, however, click on any of the following images to see each of them in full size.
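To make the ordering concrete for programmers, here is a minimal C sketch of how an editor might assemble a Pirho image or frame from its three equally sized views, in the order the specification requires: the 2D view first, then the left view, then the right view. The function and parameter names are hypothetical:

    #include <stdint.h>
    #include <string.h>

    /* Assemble a Pirho image or frame from three views of identical dimensions.
     * The result is three views wide: 2D view, left view, right view. */
    static void assemble_pirho(const uint8_t *view_2d, const uint8_t *view_left,
                               const uint8_t *view_right, uint8_t *pirho,
                               int width, int height, int bytes_per_pixel)
    {
        const int view_stride  = width * bytes_per_pixel;
        const int pirho_stride = 3 * view_stride;   /* three views side by side */

        for (int y = 0; y < height; ++y) {
            uint8_t *row = pirho + y * pirho_stride;
            memcpy(row,                   view_2d    + y * view_stride, (size_t)view_stride);
            memcpy(row + view_stride,     view_left  + y * view_stride, (size_t)view_stride);
            memcpy(row + 2 * view_stride, view_right + y * view_stride, (size_t)view_stride);
        }
    }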

The Standards

While 3D photography predates the invention of film (as in motion pictures) and digital computers, very few standards exist when it comes to storing 3D images in computer files. Well, actually, that’s an understatement. The truth is that storing 3D images in computer files is a mess and a complete disaster. As of this writing, in mid–June 2010, there is no such standard!

To be precise, I am talking about a master standard, that is to say, a standard for creating and storing 3D digital masters. A digital master is a file format suitable for creating, editing and preserving digital images without any loss of quality, just as the photographer or the filmmaker produced them. The US Library of Congress has created a special web site dedicated to Digital Preservation. It is an important web site that anyone creating digital images (and any other digital documents) should visit and read through thoroughly. A similar web site exists for Digital Preservation in Europe.

Numerous formats have been created over the years for the presentation of 3D photos and videos, that is for showing them to their intended audience. I already mentioned the yt3d standard. And recently a formal standard for 3D in BD was created. Alas, none of these standards addresses the issue of creating digital masters: They are lossy and they are only concerned with the 3D aspect alone, without recording the creator’s wishes as to which of the two channels should be used for the 2D version of the film. They are also completely unconcerned with the need to create a master with three separate views, one for 2D and two more for 3D. Additionally, you need special, often proprietary, software to produce these files. Not to mention the need for specialized software to view these files.

The 3D–ist Manifesto

We need a good new standard, a standard that would cover at least the following criteria:

  • Store 2D images and clearly mark them as 2D.
  • Store 3D images and clearly mark them as 3D.
  • Store left dominant and right dominant images and clearly mark them as such.
  • Store hybrid 2D/3D images and mark them as such.
  • Store images whose dimensionality is not known and mark them as such.
  • Be flexible. That means that if you have a hybrid video, where some frames are 2D and others 3D, yet others mixed 2D/3D, you do not have to store them all in the most complex format, but can store some frames in 2D, other frames in 3D, etc.
  • Allow storing of all of the above in a non–lossy way for digital mastering, editing and preservation.
  • Allow storing of all of the above using lossy compression for speed and delivery to the intended audience on discs, Internet, etc.
  • Work transparently with existing photo/video editors.
  • Work transparently with existing photo/video players. That means it would allow players that know nothing about 3D to display 3D images and play 3D movies without even realizing they are doing it.
  • Work transparently with existing photo and video file formats. This is really implied by the above requirement to work with existing players, but I think it is worth mentioning explicitly.
  • Allow new applications to clearly communicate with other components in the video pipeline the dimensionality of their images. In other words, an editor must have a way of informing a codec clearly and unequivocally the dimensionality of the images and video frames it is asking the codec to compress. Similarly, codecs must have a way of passing the same information on to players, etc.
  • Allow existing applications to add support for the new standard in an easy way, without the need to re–write the application from scratch.
  • Be simple to implement.
  • Be independent of special interests, so you do not have to join an association or any closed group to be able, or even allowed, to implement it.
  • Be free, so you do not have to buy expensive publications with its specification, sign non–disclosure agreements, or pay licensing fees or royalties.
  • Be democratic. That means that it must be possible for someone to write an editor, someone else to write a codec, and someone else to write a player. Many someones, actually. So, different users have a choice among many editors, many codecs, many players, and any combination thereof must be able to smoothly work together as different parts of the same solution.
  • Be extensible. That means that if a need for a different format arises in the future, it can be simply added to the standard.

Is any of this possible? You bet it is! It took me over a week to write this document. And it took me years of thinking about it before it all started fitting together. I would not have wasted my time writing all of this if it was not possible. And I would not be wasting your time by having you read all this if it was not possible.

The Seven Masters

There are many possible ways to store a master 3D image. All of them reasonable, all of them good, many even jellicle. In this section I will describe seven such masters. Between them they are capable of storing any 3D image currently in existence, with the exception of the multi–view images used for lenticular pictures. But lenticulars are a separate topic from the usual 3D, and in particular have nothing to do with 3D video. And since the standard is extensible, it is certainly possible to add a lenticular master to it if need be.

The Seven Masters fulfill all requirements of the 3D–ist Manifesto, including the need for the different components to communicate to each other just which Master they are using. This is accomplished by assigning each Master a UUID. If you don’t know what a UUID is, you can read the Wikipedia article on the subject. It is, however, not necessary for a user of editing and playing software to understand anything about the UUIDs, though a computer programmer who wants to implement the Seven Masters in his software certainly needs to know about the UUIDs (they are all collected in one short listing right after the list below). And he probably does already. ☺

  1. The First Master is General Bitmap. He does not deserve the rank of General and should really be called Private Knownothing. This is the most common type of bitmap in existence. All we usually know about it is that it is a bitmap. But we do not know whether the image it stores is 2D, 3D, or both. A human can often tell just by looking at it whether it is 2D or 3D. But even if that human can tell it is 3D, it may be very difficult to tell what kind of 3D it is. Is it side by side? Is it cross–eyed? Hard to tell. Even though some people have trained themselves to view cross–eyed images and can tell, most people simply cannot.

    This type of bitmap should be avoided in 3D work. But we still may have to deal with it, store it in a file, read it from a file, display it. All in the hope that someday someone will figure out what its dimensionality is.

    Its UUID is 4d677271-7495-11df-81d0-0013d3c8dde1, labeled as PANTARHEON_7M_GeneralBitmap. The compatible editor should pass this UUID to the codec when it has no information about the bitmap(s) it is editing. Before it does, though, it does not hurt to ask the human user to identify the type.

    Please note that the codec must not mark a bitmap as Master type 1 if the application does not tell it to. After all, legacy applications that know nothing about the Seven Masters will not inform the codec about the bitmap’s dimensionality. It is not the job of the codec to decide what it is. Its only job is to store the information it has received from the application. If, however, it feels the need to always store some information about it, it may just save “Type 0” and it may pass the UUID of 4d677270-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_Null, to any application that is asking for a UUID. Such an application, however, may feel free to ignore that UUID as spurious.

  2. The Second Master is a Known 2D Bitmap. It is not the same as the First Master, where we simply do not know. With the Second Master we know it is a 2D bitmap. That is a big difference.

    Its UUID is 4d677272-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_Known2DBitmap. It can be visually represented like this:

    Please note the image says “2D Bitmap / Frame” and not “2D View.” This distinction is quite important. The Second Master is not a single 2D view taken from a hybrid 2D/3D image. It is a pure 2D image or a 2D frame of a video, a complete image or frame. There is no left or right view associated with this image. And if you want to show it to a 3D audience, you need to show the image in its entirety through both eyes, like this:

  3. The Third Master is a General 3D Bitmap. We know it is in 3D, we also know that the left half of the bitmap shows the left view and the right half represents the right view. But we do not know whether it is a true 3D bitmap or a left dominant 3D bitmap or a right dominant one.

    By the way, if it is a 3D bitmap but is not in the left–to–right side–by–side order, then it does not qualify as the Third Master. The application must convert it to the proper format. And if it does not know how, then it must treat it as a First Master.

    Its UUID is 4d677273-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_General3DBitmap. It can be visually represented like this (click on it for full size):

  4. The Fourth Master is a True 3D Bitmap. It has a left view and it has a right view. It does not have a 2D view, presumably because its creator (such as a movie director) does not ever wish for it to be seen in 2D.

    Its UUID is 4d677274-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_True3DBitmap. It can be visually represented like this (click on it for full size), same as the Third Master, from which it differs in that we know it has no 2D portion:

    If an application asks a codec for a 2D view from a True 3D Bitmap, the codec must inform the application that no such view is available and fail the request. The same holds true if an application requests a 2D view from a Third Master.

    Naturally, requesting just the left view or just the right view from a Third or Fourth Master is valid and the codec must comply, provided it supports the Six Aspects described below.

  5. The Fifth Master is a Left Dominant Bitmap. As described earlier, it contains a separate left view and a separate right view, and it has been so optimized that the 2D view is the same as the left view.

    Its UUID is 4d677275-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_LeftDominantBitmap. It can be visually represented like this (click on it for full size):

  6. The Sixth Master is a Right Dominant Bitmap. As described earlier, it contains a separate left view and a separate right view, and it has been so optimized that the 2D view is the same as the right view.

    Its UUID is 4d677276-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_RightDominantBitmap. It can be visually represented like this (click on it for full size):

  7. The Seventh Master is a Pirho Bitmap, that is, a bitmap that contains a separate entry for the 2D view, a separate one for the left view and a separate one for the right view.

    Its UUID is 4d677277-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_7M_PirhoBitmap. It can be visually represented like this (click on it for full size):
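For programmers, the UUIDs listed above can be collected in one place. The declarations below are just one convenient way of writing them down in C; how a particular platform prefers to represent UUIDs is up to the implementer:

    /* The Seven Masters and the Null UUID, exactly as defined in this specification. */
    static const char *PANTARHEON_7M_Null                = "4d677270-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_GeneralBitmap       = "4d677271-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_Known2DBitmap       = "4d677272-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_General3DBitmap     = "4d677273-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_True3DBitmap        = "4d677274-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_LeftDominantBitmap  = "4d677275-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_RightDominantBitmap = "4d677276-7495-11df-81d0-0013d3c8dde1";
    static const char *PANTARHEON_7M_PirhoBitmap         = "4d677277-7495-11df-81d0-0013d3c8dde1";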

As mentioned, these Seven Masters can store any kind of 2D, 3D, and hybrid 2D/3D image. Because of that, they comprise the only seven input formats needed. By input I mean those formats that editors should use to save dimensional images in. That is, they are the only input formats compatible codecs need to accept.

So, what if an editor has processed, say, an above/below image, like this?

Then, to be compatible, the editor must convert it to the Fourth (or possibly Fifth or Sixth) Master before sending it out to the codec. It is very easy to do and it keeps the whole system simple, so codecs, which tend to outnumber the applications, do not need to support every format imaginable. And it is the job of the editor to accept and process the widest possible range of formats. All codecs are supposed to do is encode and decode images to a format suitable for storage in a file container. Besides, when the editor wants to support something new and exotic, does it really want to rely on its user having a suitable codec installed?

Please note that nowhere in the Seven Masters section have I mentioned different pixel types, such as RGB, RGBA, or YUV (well, nowhere until now ☺). That is because the Seven Masters have nothing to do with pixel types. They only govern how the views are arranged within a bitmap, but the bitmap can use any pixel type the applications and the codecs wish to support.

Formats & Subformats

Some of the Master formats are subformats of other Masters. That means they could be expanded into other formats.

First of all, the Second Master is a subformat of the Fourth, Fifth, Sixth and Seventh Masters. The Second Master, as you recall, is a Known 2D Bitmap:

It can be expanded to the Fourth Master (True 3D Bitmap), the Fifth Master (Left Dominant Bitmap) or the Sixth Master (Right Dominant Bitmap) like this:

And it can be expanded to the Seventh Master (Pirho Bitmap) like this:

The Fifth Master (Left Dominant Bitmap) can be expanded to the Seventh Master (Pirho Bitmap) like this:

And the Sixth Master (Right Dominant Bitmap) can be expanded to the Seventh Master (Pirho Bitmap) like this:

Does that make the Second, Fifth and Sixth Masters redundant? Yes. Does it make them lesser Masters? No. They are very useful. They are smaller, so they take up less space when stored in a file and require less memory when passed between an application and a codec. It is quite possible that some codecs may check the Fourth, Fifth, Sixth and Seventh Masters to see if they could be reduced to the smaller Masters, but they are in no way required to do so.
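In code, the expansions just described are mostly copies. Here is a minimal C sketch, under the same assumptions as the earlier snippets (hypothetical names, packed pixels), that expands a Left or Right Dominant Bitmap into a Pirho Bitmap by copying the dominant view into the 2D slot:

    #include <stdint.h>
    #include <string.h>

    /* Expand a side-by-side dominant bitmap (left view, then right view) into
     * a Pirho bitmap (2D view, left view, right view). The 2D view is simply
     * a copy of the dominant view. */
    static void expand_dominant_to_pirho(const uint8_t *dominant, uint8_t *pirho,
                                         int width, int height, int bytes_per_pixel,
                                         int left_dominant)
    {
        const int view_stride = width * bytes_per_pixel;
        const int src_stride  = 2 * view_stride;   /* two views in the source   */
        const int dst_stride  = 3 * view_stride;   /* three views in the result */
        const int dom_offset  = left_dominant ? 0 : view_stride;

        for (int y = 0; y < height; ++y) {
            const uint8_t *src = dominant + y * src_stride;
            uint8_t *dst = pirho + y * dst_stride;
            memcpy(dst, src + dom_offset, (size_t)view_stride);        /* 2D view        */
            memcpy(dst + view_stride, src, (size_t)(2 * view_stride)); /* left and right */
        }
    }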

Once again, the purpose of a codec is to encode and decode image data. And while most codecs compress the data (and decompress later), not all do. But it is not their job to figure out something the editor should be doing. And if they do, they may become slower unnecessarily, given that the editor has probably done it already.

So, if you write an editor, you should always find the best Master for each video frame. It does not have to be the same Master for every frame.

The Six Aspects

There are six additional formats we should talk about, the Six Aspects. Unlike the Seven Masters, the Six Aspects do not represent an image in its entirety and as such must be seen as incomplete. They contain one or two views of a 3D image. They can also be used with 2D images but are less useful with them since all views of a 2D image are identical.

Additionally, the Six Aspects are not mandatory. Compatible codecs do not have to support them and still can claim the Seven Masters compatibility. So, applications should be prepared to extract any of the Six Aspects from a full bitmap or frame on their own.

That said, if it is not completely unfeasible for a codec to support them, it should, because that usually speeds decoding up and reduces the amount of memory needed for decoding. Whether the encoder part of the codec should support them is a bit controversial. It creates the danger of saving one view of a stereoscopic image in a file and not saving the other. Or the possibility of saving each view in a separate file and later losing one of the files.

On the other hand, there is always the possibility that a video editor which knows nothing about 3D, let alone the Seven Masters, can perform a specialized task that our 3D editor cannot handle. In that case it is useful to save each view in a separate file and open each of them as a separate track in that 2D editor.

Anyway, here are the Six Aspects:

  1. The First Aspect contains the Left View of a bitmap. Note that 2D bitmaps have an implicit left view, which is the bitmap itself. And, of course, 3D bitmaps have an explicit left view.

    Its UUID is 4d677278-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_6A_LeftView. It can be visually represented like this:

  2. The Second Aspect contains the Right View of a bitmap. Again, 2D bitmaps have an implicit right view, which is the bitmap itself. And 3D bitmaps have an explicit right view.

    Its UUID is 4d677279-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_6A_RightView. It can be visually represented like this:

  3. The Third Aspect contains the 2D View of a bitmap. Do not confuse it with the Second Master, which is a 2D bitmap. The Third Aspect is just the 2D view of a hybrid 2D/3D bitmap. As before, though, 2D bitmaps have an implicit 2D view, which is the bitmap itself. And pure 3D bitmaps (Third and Fourth Masters) have no 2D view.

    Its UUID is 4d67727a-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_6A_2DView. It can be visually represented like this:

  4. The Fourth Aspect contains the Left and Right Views of a 3D bitmap. Do not confuse it with the Fourth Master, which is a 3D bitmap. The Fourth Aspect is just the 3D view of a hybrid 2D/3D bitmap.

    Its UUID is 4d67727b-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_6A_LeftRightView. It can be visually represented like this:

  5. The Fifth Aspect contains the 2D and Left Views of a hybrid 2D/3D bitmap.

    Its UUID is 4d67727c-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_6A_2DLeftView. It can be visually represented like this:

  6. And finally, the Sixth Aspect contains the 2D and Right Views of a hybrid 2D/3D bitmap.

    Its UUID is 4d67727d-7495-11df-81d0-0013d3c8dde1, labeled PANTARHEON_6A_2DRightView. It can be visually represented like this:
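Since a codec is free not to support the Six Aspects, an application should be able to cut any of them out of a full frame on its own. Here is a minimal C sketch of extracting a single view from a Pirho frame, with the same hypothetical packed-pixel assumptions as before; the two-view Aspects follow the same pattern, copying two thirds of each row instead of one:

    #include <stdint.h>
    #include <string.h>

    /* Which third of a Pirho frame holds which view. */
    enum pirho_view { PIRHO_2D = 0, PIRHO_LEFT = 1, PIRHO_RIGHT = 2 };

    /* Extract one view (one of the single-view Aspects) from a Pirho frame. */
    static void extract_view(const uint8_t *pirho, uint8_t *view,
                             int width, int height, int bytes_per_pixel,
                             enum pirho_view which)
    {
        const int view_stride  = width * bytes_per_pixel;
        const int pirho_stride = 3 * view_stride;

        for (int y = 0; y < height; ++y)
            memcpy(view + y * view_stride,
                   pirho + y * pirho_stride + (int)which * view_stride,
                   (size_t)view_stride);
    }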

Mixing Formats

So far we have only discussed the formats of individual bitmaps (static photographs) or video frames (individual frames of a video). But a video consists of many individual frames. For example, a 90 minute video shot at 24 frames per second contains 129,600 frames (90 × 60 × 24). If the entire video is shot in, say, the Left Dominant format, then every single frame will be stored as a Fifth Master frame. But what if a portion of the video is in 2D? While we certainly could expand every single 2D frame to the Fifth Master, that may result in wasting disc space, since codecs are not required to examine every frame and see if it can be stored in a smaller format. We need the ability to mix different formats in a video.

If we think of a video as a stream of frames, we can choose one common format that any individual frame can be converted to, a least common denominator, to borrow a term from mathematics. We will refer to it as the stream format, as opposed to the frame format. Once we have a suitable stream format, any individual frame can be in that format or in any format that can be converted to the stream format.

Ideally, we will choose the stream format in pre–production, before we even start shooting. That way we can make sure that every single frame of our video can be used in our stream. If we wait till production or post–production, we may find out too late that our choices are limited and we either have to re–shoot some of the work or be stuck with fewer choices.

When choosing the right stream format, we should consider the following:

  • A 2D frame can be converted to any other format by just showing the same image to both eyes.

  • The Fourth Master (True 3D Bitmap) cannot be converted to anything else because it has no 2D view associated with it. So, we can only use it in a video whose stream format is the Fourth Master.

  • The Third Master (General 3D Bitmap) has a similar limitation, not because it could not be converted to any other format but because we simply do not know whether it could.

  • The Fifth and Sixth Masters (Left and Right Dominant Bitmaps) cannot be converted to each other but they can be converted to other formats, most notably the Seventh Master.

So, the main consideration is that a True 3D Bitmap can only be used in videos that are never meant to be viewed in 2D. The second main consideration is that if you mix Left Dominant Bitmaps with Right Dominant Bitmaps, you have to choose the Pirho Bitmap as your stream format even if perhaps none of your individual frames is in the Pirho format.
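These rules amount to a small compatibility test. The C sketch below expresses them as a single function; the names and numbering are mine (Masters numbered 1 through 7 as listed earlier), and where the rules above leave a case open, the sketch errs on the side of saying no:

    /* Frame and stream formats, numbered as in the Seven Masters list. */
    enum master {
        M_GENERAL_BITMAP = 1, M_KNOWN_2D, M_GENERAL_3D, M_TRUE_3D,
        M_LEFT_DOMINANT, M_RIGHT_DOMINANT, M_PIRHO
    };

    /* Can a frame in format `frame` be carried by a stream whose format is `stream`? */
    static int fits_in_stream(enum master frame, enum master stream)
    {
        if (frame == stream)
            return 1;
        if (frame == M_KNOWN_2D)
            return 1;   /* a 2D frame converts to anything             */
        if (frame == M_TRUE_3D || frame == M_GENERAL_3D)
            return 0;   /* no 2D view, so only their own stream format */
        if ((frame == M_LEFT_DOMINANT || frame == M_RIGHT_DOMINANT) && stream == M_PIRHO)
            return 1;   /* a dominant frame expands to a Pirho frame   */
        return 0;
    }

Mixing Left Dominant frames with Right Dominant frames therefore forces the Pirho stream format, exactly as stated above.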

The stream format is more than just a decision to make: it has to be stored in the video file. If it were not, you might get a surprise. In one case, certainly: if a video starts in 2D and you are watching it in 2D, and then some True 3D frames come up, suddenly you can no longer watch the video in 2D and you have a problem. But if the codec knows the stream format is incompatible with 2D viewing, the video will not play in 2D at all, so you get no surprises in the middle of the movie.

The question is where in the file the stream format should be stored. An obvious choice would be the file header, the section of the file that describes global properties of the video. Alas, that would require the creation of YAVFF, Yet Another Video File Format, which is not a good idea. For one, we already have plenty of video file formats, many of which offer the same functionality as other video file formats. More importantly, creating YAVFF would mean that most existing video software would not know how to write to it and how to read it. Not good!

A much better solution is for the codecs to store that information somewhere within each frame. They already have to store the frame format in each frame, so also storing the stream format adds no overhead worth worrying about. A single byte is large enough to store both the stream format and the frame format. Compare that with the roughly eight megabytes in an uncompressed RGBA frame of a 1080p video and you will see that a mere extra byte does not affect the size of the video in any significant way.

And yes, I said “somewhere within each frame.” Why each, why not just the first frame? Because there is no guarantee that during playback the codec will ever see the first frame. Indeed, there is no guarantee it will ever see any particular frame. That is because people are free to start watching the video from anywhere: the start, the middle, close to the end. Anywhere! The only way to guarantee the codec will see this information is if it is stored in every single frame within the video stream.
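Just to illustrate how little room that byte needs, here is one possible packing in C, with the stream format in one nibble and the frame format in the other. The packing itself is my own illustration; the point is only that both values fit comfortably in a single byte stored with every frame:

    #include <stdint.h>

    /* Pack the stream format into the high nibble and the frame format into
     * the low nibble of a single byte. Both are Master numbers 1..7
     * (0 meaning "unknown"). */
    static uint8_t pack_formats(uint8_t stream_format, uint8_t frame_format)
    {
        return (uint8_t)((stream_format << 4) | (frame_format & 0x0F));
    }

    static uint8_t unpack_stream_format(uint8_t packed) { return packed >> 4;   }
    static uint8_t unpack_frame_format(uint8_t packed)  { return packed & 0x0F; }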

Frame Width

So now we can mix 2D frames with 3D frames and Pirho frames. As our illustrations suggest, the 3D frame is twice as wide as the 2D frame, and the Pirho frame three times as wide as the 2D frame. Will this not confuse the players?

Consider this. If a grayscale image, with one byte per pixel, is replaced with a full–color image with an alpha channel to boot, so that it now uses four bytes per pixel, does that mean the image is now four times as wide? Of course not. The width of an image does not change just because the number of channels per pixel has increased. The only criterion to determine the width of an image is the number of pixels in each line.

In exactly the same way, the number of pixels in an image does not increase because the image is stereoscopic. The depth of the image changes, though only as far as our perception is concerned. But its width remains the same.
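A small worked example may help. The width reported to the rest of the system never changes; only the amount of pixel data behind it does. Assuming 8–bit RGBA pixels at 1080p:

    #include <stddef.h>
    #include <stdio.h>

    /* Size of a frame buffer: width x height x bytes per pixel x number of views.
     * The reported width stays the same no matter how many views there are. */
    static size_t frame_bytes(size_t width, size_t height,
                              size_t bytes_per_pixel, size_t views)
    {
        return width * height * bytes_per_pixel * views;
    }

    int main(void)
    {
        printf("2D frame:    %zu bytes\n", frame_bytes(1920, 1080, 4, 1)); /*  8 294 400 */
        printf("3D frame:    %zu bytes\n", frame_bytes(1920, 1080, 4, 2)); /* 16 588 800 */
        printf("Pirho frame: %zu bytes\n", frame_bytes(1920, 1080, 4, 3)); /* 24 883 200 */
        return 0;
    }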

That presents us with a challenge. We have gone out of our way to ensure our formats are compatible with legacy software, that is, software that does not know or care about the Seven Masters. To deal with this challenge, we need to discuss the three parts of editing and displaying videos:

  1. The first part is the source of the image(s). This can be a camera that captures the images. It can also be some CGI software that generates images in a computer or a graphics device. And it can be an NLE (non–linear editor) or any other software or hardware that modifies the images. Last but certainly not least, it can be a file, one of those YAVFFs mentioned earlier, in which an existing stream of images has been saved for future use.

  2. The second part of the system is the sink, which consumes the images in some way. Typically, it will display them on a computer or video monitor or a TV set. Or it can save them to a file. An NLE can also function as a sink when it is loading videos from a file or capturing them from a device (just to become a source later on when it is done editing and needs to save them in a file).

  3. Images exist in numerous formats, so the source and the sink generally cannot understand each other. For that reason a third part of the system is needed. It is the codec, which is some software or hardware that converts the images coming from the source to images the sink can understand. The original reason for introducing codecs was to allow image compression. Raw images tend to be huge, and when you have hundreds of thousands of video frames, you might easily fill up an entire hard disk with their data (90 minutes of uncompressed 1080p RGB video at 24 fps with 8 bits per channel would need about 800 gigabytes of storage, at least twice as much in 3D). Generally, images contain a lot of redundant data, so they can be compressed to a smaller size.

    The compression can be lossy, which means some of the information is thrown away completely and irrevocably. In that case you can never recover the original image from the compressed data, but, presumably, you get an image that the average human cannot tell from the original. Lossy compression is used for delivering videos to the final audience. An example of that is the MPEG compression. The advantage of lossy compression is that the data can be made much smaller and as such require much less storage space.

    The compression can also be lossless, which allows you to decompress the exact original from the data. Since nothing is thrown away, lossless compression generally produces larger data than the lossy compression but still smaller than the raw data of the original image. It is useful for storing your masters, so you can edit them with finesse.

    For this reason, codecs usually consist of two major components. The first is the encoder, which accepts a raw image from a source, such as the camera or an editor, compresses it and passes it on to a sink, which then writes the compressed data to a file.

    The second is the decoder, which accepts the compressed data from a source, such as a file reader, decompresses it into a raw image and passes it on to a sink, such as a video player or an editor.

    Some codecs can do other things, but encoding and decoding are the two functions important for our discussion.

    Additionally, most codecs contain a dialog that allows you to configure them before the encoding starts. The editor that wants to save a video usually allows you to open that dialog first. This, too, will allow us to use our Seven Masters even with editors that do not know about the Seven Masters.

    Sadly, most codecs do not have a similar dialog which could be used before decoding. This is because decoders were not originally created with 3D in mind. Their creators simply assumed that a decoder would always decode the images to the same format. That does not mean that a codec cannot offer a dialog for decoding options, but you usually have to call it from some separate configuration program before you even ask your player to show you the video the compatible codec has to decode.

    As far as implementing the Seven Masters goes, the codec is the most important part of the system. With a proper codec, we may be able to store our images in the Seven Master formats even if the source knows nothing about the formats. And we may be able to view the video in 3D even if the player does not understand it.

    But without a compatible codec the source cannot store images in the Seven Master formats, nor can the player get enough information to play the video properly.
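One way to picture the three parts is as a trio of roles that only ever exchange frames and format information. The C sketch below is purely illustrative, not any existing codec API; it merely shows what each role needs to tell the next one under the Seven Masters:

    #include <stddef.h>
    #include <stdint.h>

    /* A frame as it travels through the pipeline: pixel data (raw or compressed)
     * plus the two pieces of format information discussed in this document. */
    struct sm_frame {
        const uint8_t *data;
        size_t         size;          /* bytes of data                          */
        int            width, height; /* width and height of a single view      */
        int            stream_format; /* Master number of the stream as a whole */
        int            frame_format;  /* Master number of this particular frame */
    };

    /* The source produces frames: a camera, CGI, an NLE, or a file reader. */
    typedef int (*sm_source_next)(void *ctx, struct sm_frame *out);

    /* The codec converts frames: compressing on the way to a file,
     * decompressing on the way to a player or editor. */
    typedef int (*sm_codec_process)(void *ctx, const struct sm_frame *in,
                                    struct sm_frame *out);

    /* The sink consumes frames: a monitor, a player, a file writer, or an NLE. */
    typedef int (*sm_sink_consume)(void *ctx, const struct sm_frame *in);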

Armed with that knowledge, we can now return to discussing the frame width. We have determined already that just because an image is in 3D, its width is the same as if it were not. So, if we have shot a video with two or three cameras, each set to 1080p, each image is 1920×1080 pixels, 1920 being its width. And when combined into a 3D format, any of the Seven Masters, its width remains specified as 1920 pixels.

In an ideal world, our source, our codec, and our sink are all Seven Masters compatible. In that case, regardless of the format, the source will inform the codec the width of each and every frame is 1920. It will also inform it what format the frame is in. The codec will compress it to the best of its ability, inform the sink the frame is 1920 pixels wide and pass the compressed data to the sink.

The source will figure out the correct stream format and will inform the codec what that format is even before it starts sending any frames to the codec (just how that happens is discussed in the API, as I do not want to get too technical here, so any 3D photographer and filmmaker can read this without worrying about computer talk). The source will also determine the proper frame format for each and every frame and let the codec know. That way, the codec does not have to analyze each frame, which would slow it down. The codec will then pass all this information on to the sink along with the compressed data. Actually, this will work even if a file–writing sink knows nothing about the Seven Masters because the same information is stored in each frame. As long as the file format uses codecs for compression, it does not need to know the details.

And, of course, whether the source sends in a 2D frame, one of the 3D frames, or a Pirho frame, it will always say its width is 1920 (in the case we are using as our example), and the codec will inform the sink the width is 1920 as well. The codec will understand the actual number of bytes to compress for each format. I know I am repeating myself but it is very important to do it right, or else different parts of the system will not work smoothly together.

In the opposite case, where the codec works as a decoder, the source will be a file reader. It may or may not know about the Seven Masters. That does not really matter, though if it does know, it may speed everything up by a millisecond or so. The codec will determine the stream format and pass it on to the sink. The sink may request the individual frames be sent to it in any of the Seven Masters or Six Aspects that can be extracted from the stream format. The codec may comply with that request. For example, if the sink is a video player and the user asked it to show the video in 2D, the sink may ask to receive everything in the Third Aspect. Or if the user wants to watch it in 3D, the sink may request the Fourth Aspect. If the sink is an NLE, it may request some other aspect, or it may just ask to stick with the stream format.

The codec is not required to honor the request, though, and may just stick with whatever format each frame is in. But if it does honor the request, less memory and less processing is required, so codecs should respect the preferences of the sink. In any case, regardless of what format each frame is in, the codec will always tell the sink that the frame width is the same as the width of a 2D view, in our example 1920 pixels.
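Continuing the illustrative sketch above, the decode-side negotiation might look roughly like this: the sink states a preference (a 2D player asks for the 2D view, a 3D player for the left and right views), and the codec either honors it or falls back to the stream format. Again, the names are hypothetical:

    /* Aspects a sink may ask the decoder for. */
    enum sm_aspect {
        SM_ASPECT_STREAM = 0,  /* whatever format the stream is in        */
        SM_ASPECT_2D,          /* Third Aspect: the 2D view               */
        SM_ASPECT_LEFT_RIGHT   /* Fourth Aspect: the left and right views */
    };

    /* The codec may honor the request or stick with the stream format. */
    static enum sm_aspect negotiate_aspect(enum sm_aspect requested,
                                           int codec_supports_aspects)
    {
        return codec_supports_aspects ? requested : SM_ASPECT_STREAM;
    }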

Of course we do not always live in an ideal world, so we need to deal with the situation when only some parts of the system are Seven Masters compatible, as well as with the case when no part of the system is. That is indeed the case as I am typing this because no compatible codec, let alone editor or player, exists yet. I am working on a codec but have not created it yet, as I wanted to write all of this down, so I can refer to it. It is amazing how much writing your ideas down helps you implement them!

So, how do we create a Seven Masters compatible video with nothing but legacy software? We edit the video in 3D using the Bororo 3D plug-in for Sony Vegas or the Pantarheon 3D AviSynth Toolbox, or any other 3D software. If it can all be expressed in a side–by–side left/right manner, we assemble a left/right video in our editor using double the original width. Or, if we need to treat the 2D sections separately from the 3D sections, we assemble it all (yes, every frame) in the Pirho format using triple the original width. And we just save it to a file that way. Most existing 3D players can handle the left/right format already. And the upcoming version 2.0 of the Pantarheon 3D AviSynth Toolbox (which I will start working on as soon as I finish this document, on the date shown above) will let you separate the 2D and 3D views from the Pirho format, so, again, you will be able to watch your video in existing 3D players.
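
For those who like to see it spelled out, here is a small sketch of what "double the original width" means for the side–by–side left/right case: every row of the combined frame is simply the left row followed by the right row (the Pirho format would do the same with three views and triple the width). The function name and the single-channel 8-bit pixels are just my assumptions, chosen for brevity.

    /* Illustration only: pack two equally sized views into one side-by-side
     * frame of double width, row by row. Single-channel 8-bit pixels assumed;
     * the Pirho format works the same way with three views and triple width. */
    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    void pack_left_right(const uint8_t *left, const uint8_t *right,
                         uint8_t *combined, size_t width, size_t height) {
        for (size_t y = 0; y < height; ++y) {
            /* the left view fills the first half of the combined row... */
            memcpy(combined + y * 2 * width,         left  + y * width, width);
            /* ...and the right view fills the second half */
            memcpy(combined + y * 2 * width + width, right + y * width, width);
        }
    }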

If all you have is a compatible codec but a legacy editor, again, use the editor to prepare a suitable 3D or Pirho format and save it to a file using the codec. Most codecs allow you to configure them before encoding. So, tell your codec what format the source images are in. And, while codecs are not normally expected to analyze each frame individually, this time ask the codec to examine each and every frame and save it in the appropriate frame format. That will take a little longer but you will end up with a smaller disk file.

If, on the other hand, you have a compatible editor but only legacy codecs, then the editor should determine the stream format (as it would with a compatible codec anyway), convert all frames to that format, and save them to a file using double or triple the width, respectively.

Last but not least, if you have a compatible editor and a compatible codec but only a legacy player, let the editor save it all in a compatible format, using the width of 1920 or whatever width the individual views are in. Then let the codec deal with the player. Presumably, the codec will come with some utility that lets you configure what it should do when confronted by a non–compatible player: It could, for example, just extract the 2D view and send it to the player. Or it could extract the 3D view, convert it to anaglyph and send that to the player. Suddenly your 2D player can play 3D movies without even knowing about it!
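
As an illustration of the anaglyph route, the simplest red/cyan conversion such a utility could offer takes the red channel from the left view and the green and blue channels from the right view. This is a sketch under my own assumptions (interleaved 24-bit RGB, a function name I made up), not anything prescribed by the standard.

    /* Sketch of a naive red/cyan anaglyph: the red channel comes from the left
     * view, the green and blue channels from the right view. Interleaved
     * 24-bit RGB is assumed purely for simplicity. */
    #include <stdint.h>
    #include <stddef.h>

    void anaglyph_red_cyan(const uint8_t *left, const uint8_t *right,
                           uint8_t *out, size_t pixel_count) {
        for (size_t i = 0; i < pixel_count; ++i) {
            out[3 * i + 0] = left [3 * i + 0];   /* R from the left view  */
            out[3 * i + 1] = right[3 * i + 1];   /* G from the right view */
            out[3 * i + 2] = right[3 * i + 2];   /* B from the right view */
        }
    }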

Implementation

Implementing all of this is fairly straightforward. Without getting too technical, here is an overview of how the different parts of the system can communicate their intent.

When the source wants the codec to compress 3D video frames, it asks the codec if it is Seven Masters compatible. If the codec says no (and, yes, even existing codecs that know nothing about this will say no, thanks to the way codecs work), the source will either have to look for another codec or proceed as described above where I mentioned how compatible sources can work with incompatible codecs.

If, however, the codec says yes, the source will ask if the codec supports the desired stream format and, if so, it will start sending the frames to the codec, telling the codec the frame format of each.

The sink, which in this case would probably be a file writer, can also ask the codec if it is compatible. And if so, it may ask the codec for the stream format and store it in the file header, provided the file format allows it (some file formats are quite extensible that way).

In the opposite scenario, where the codec is decoding data from a file, the source (a file reader) may ask the codec if it is compatible (which is likely, since the codec created the compressed data in the first place). If so, the source will tell it the stream format. Even if the file–reading source does not have this information or is not compatible, the codec can determine the stream format and the frame format of each frame from the frames themselves.

More importantly, the sink, such as an NLE or a player, will ask the codec if it is compatible. If so, it will let the codec know how to inform it of the stream format. It can also tell the codec what kind of stream format it would like to receive. This can be any format compatible with the actual stream format. For example, if the stream format is the Pirho Format, a player may request to receive a Left / Right view if the user wants to watch the video in 3D, or a 2D view if the user wants to see it in 2D. If the codec supports the desired format, it will decode the data into it. But it is not required to do so; it may just use whatever format is stored in the file, so players should be ready to convert between the different formats on their own.

If the sink is an incompatible player, so it does not ask the codec about its Seven Masters compatibility, then the codec will proceed as described above for dealing with incompatible players.
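
Put together, the whole negotiation boils down to a short sequence of questions. The following sketch uses made-up names, since the actual API is not published yet (see the note below); only the order of the questions matters: compatibility first, then the stream format, then the individual frames.

    /* Hypothetical handshake, in the order described above. Every identifier
     * here is an assumption; only the sequence of the questions matters. */
    #include <stdbool.h>

    /* stand-ins for a real codec */
    bool codec_is_seven_masters_compatible(void)         { return true; }
    bool codec_supports_stream_format(int stream_format) { (void)stream_format; return true; }
    void codec_set_stream_format(int stream_format)      { (void)stream_format; }
    void codec_compress_frame(int frame_format)          { (void)frame_format; }

    bool encode_with(int stream_format, const int *frame_formats, int frame_count) {
        if (!codec_is_seven_masters_compatible())        /* legacy codecs answer no */
            return false;                                /* fall back to the legacy route above */
        if (!codec_supports_stream_format(stream_format))
            return false;                                /* or look for another codec */
        codec_set_stream_format(stream_format);          /* announced before any frame is sent */
        for (int i = 0; i < frame_count; ++i)
            codec_compress_frame(frame_formats[i]);      /* each frame labeled individually */
        return true;
    }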

The technical details for programmers wishing to implement Seven Masters in their software are published in the Seven Masters API. If you are a programmer, please read them. If you are not, tell your favorite software developer to read them and ask him to implement them.

NOTE: The API is not written yet. I wanted to write this specification first. I expect to have it done within a month or two, though with my 30 years of diabetes sometimes robbing me of all energy for weeks at a time, I cannot be sure of an exact timetable. This note will self–destruct once the API is published.

Extensibility

Earlier I mentioned the Seven Masters standard was extensible. It is, indeed, quite simple to extend. All we need to do is add a new UUID or a set of UUIDs. Here, for example, I add some for use with lenticular photography, which uses cameras with a number of lenses, typically 3 or 5, to take a number of views. These are combined, using specialized technology, into prints that show the picture in 3D without the use of special glasses.

The distance between two lenses of a lenticular camera is smaller than the distance we traditionally associate with 3D photography, and is highly dependent on the number of lenses (views) used. We may be able to choose two of the views for traditional 3D, as long as we know the distance between the lenses. And we may be able to choose a middle image, that is, a 2D view.

All of that means one UUID cannot cover all possible lenticular images. We need a group of UUIDs from which we can tell the following:

  • The number of views, i.e., how many separate lenses the camera used.

  • Which, if any, of the views is the left view of traditional 3D.

  • Which, if any, of the views is the middle, that is 2D, view.

  • Which, if any, of the views is the right view of traditional 3D.

We can use the group of UUIDs described as 4c4dVXYZ-7495-11df-81d0-0013d3c8dde1, where V is a hexadecimal digit representing the total number of views. Obviously, it makes no sense for it to be 0 or 1, and probably not 2. So it will be something between 3 and 15 (f in hexadecimal).

X represents the left view. If it is 0, no left view is defined. Otherwise it is the number of the view which is left, counting from left as 1 to right as V (again, it makes no sense for it to be greater than V).

Y represents the middle view. If it is 0, no middle view is defined. Otherwise it is the number of the view which is middle, counting from left as 1 to right as V.

And Z represents the right view. If it is 0, no right view is defined. Otherwise it is the number of the view which is right, counting from left as 1 to right as V. And of course, if a left view is defined, the right view needs to be defined as well. And if a right view is defined, the left view needs to be defined, too.

For example, the UUID 4c4d7246-7495-11df-81d0-0013d3c8dde1 defines a lenticular image with seven views. The second view from the left is the left view, the fourth view from the left is the 2D view, and the sixth view from the left (which is the second view from the right) is the right view. And 4c4d4000-7495-11df-81d0-0013d3c8dde1 defines a lenticular image with four views, none of which is suitable for either the traditional 3D or 2D.
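
Since these UUIDs follow a fixed pattern, composing and checking them is trivial. The sketch below is merely my illustration of the rules above (the function name is invented); it rejects the combinations that make no sense, such as a left view without a right view, and reproduces the first example when given seven views with 2, 4 and 6 as the left, middle and right views.

    /* Illustration of the 4c4dVXYZ-7495-11df-81d0-0013d3c8dde1 scheme described
     * above: V = number of views, X = left view, Y = middle (2D) view,
     * Z = right view, each a single hexadecimal digit, 0 meaning "not defined". */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    bool make_lenticular_uuid(char *out, size_t out_size, unsigned views,
                              unsigned left, unsigned middle, unsigned right) {
        if (views < 3 || views > 15)                          return false; /* V is 3..f */
        if (left > views || middle > views || right > views)  return false;
        if ((left == 0) != (right == 0))                      return false; /* left and right go together */
        snprintf(out, out_size, "4c4d%x%x%x%x-7495-11df-81d0-0013d3c8dde1",
                 views, left, middle, right);
        return true;
    }

    int main(void) {
        char uuid[37];    /* 36 characters plus the terminating zero */
        if (make_lenticular_uuid(uuid, sizeof uuid, 7, 2, 4, 6))
            printf("%s\n", uuid);   /* prints 4c4d7246-7495-11df-81d0-0013d3c8dde1 */
        return 0;
    }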

Note that the software that supports the Seven Masters does not have to support the lenticular masters, or any other extensions. And vice versa.

Copyright © 2010 G. Adam Stanislav.
All rights reserved.