ISO/IEC TR 23090-27:2025
Information technology — Coded representation of immersive media — Part 27: Media and architectures for render-based systems and applications
Technologies de l'information — Représentation codée de média immersifs — Partie 27: Médias et architectures pour les systèmes et applications basés sur le rendu graphique
Technical Report
ISO/IEC TR 23090-27
First edition 2025-06

Information technology — Coded representation of immersive media —
Part 27: Media and architectures for render-based systems and applications

Technologies de l'information — Représentation codée de média immersifs —
Partie 27: Médias et architectures pour les systèmes et applications basés sur le rendu graphique
© ISO/IEC 2025
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents

Foreword . v
Introduction . vi
1 Scope . 1
2 Normative references . 1
3 Terms and definitions . 1
4 Overview . 4
4.1 Motivation to develop standards for distribution of scene-based media . 4
4.2 Immersive displays and their information requirements . 5
4.2.1 General . 5
4.2.2 Raster media, 3D graphics and information requirements of immersive displays . 5
4.2.3 Defining necessary and sufficient thresholds of media information . 6
4.2.4 Accommodating displays with a variety of information requirements . 6
4.3 Content creation not in scope for MPEG International Standards . 6
4.4 Challenges for distribution of immersive media . 7
4.4.1 Interoperability between renderers . 7
4.4.2 Interchange between media formats . 7
4.4.3 Lack of standard specifications of media format . 8
5 On the usage of Immersive Technology Media Format for media interchange . 8
6 Media workflow from content production to distribution . 8
6.1 Introduction to media workflows . 8
6.2 Camera captured . 9
6.3 Computer generated . 9
6.4 Hybrid . 9
6.5 Display considerations . 10
6.6 Clarifications on media workflow terminology . 10
6.6.1 Mastering format . 10
6.6.2 Contribution, mezzanine, or ingest format . 10
6.6.3 Distribution or last-mile format . 11
6.7 Observations about media formats and variety of displays . 11
7 Renderers . 11
7.1 Introduction to renderers . 11
7.2 Classification of rendering methods . 12
7.2.1 Real-time renderers . 12
7.2.2 Non real-time or offline renderers . 12
7.3 Families of rendering techniques for immersive applications . 12
7.3.1 Rasterisation . 12
7.3.2 Ray tracing . 12
7.3.3 Hybrid . 13
7.4 Other rendering considerations . 13
7.4.1 Distributed and network rendering and “edge compute” . 13
7.4.2 Scene compression . 13
7.4.3 Anti-aliasing . 13
7.4.4 Deep learning . 13
7.4.5 Hardware . 13
7.5 Current state of the art . 14
8 Immersive displays . 14
8.1 Characterizing the immersive display . 14
8.2 Fundamental output units of displays . 14
8.2.1 Points of light . 15
8.2.2 Rays of light . 15
8.2.3 Waves of light . 15
8.3 Families of displays emitting points of light . 15
8.3.1 Single panel planar displays . 15
8.3.2 Multi-planar displays . 15
8.3.3 Volumetric displays . 15
8.4 Families of displays emitting rays of light . 16
8.4.1 Dense light field displays . 16
8.4.2 Low density ray displays . 16
8.5 Families of displays emitting waves of light . 16
9 Architectures and interfaces . 17
9.1 General . 17
9.2 Architecture for streaming of frame-based media . 17
9.3 Architecture for streaming of scene-based media . 18
9.4 Hypothetical immersive display architecture . 20
9.5 Architecture for scene graph representations . 21
9.6 Architecture for scene graph representations with a shared asset . 22
9.7 Architecture for annotated scene graph using IMS . 23
9.8 Architecture for translation between scene graph representations using IMS . 23
9.9 Adaptation of scene-based media to the capabilities of the client and/or to current conditions . 25
10 Use cases, requirements and assumptions . 26
10.1 General . 26
10.2 Scenario 1 Distribution to a single immersive media client (display) . 26
10.2.1 Introduction to scenario 1 . 26
10.2.2 Desired features for scenario 1 . 26
10.3 Scenario 2 Distribution to heterogeneous set of immersive media clients (displays) . 26
10.3.1 Introduction to scenario 2 . 26
10.3.2 Desired features for scenario 2 . 27
10.4 Scenario 3 Distribution to both legacy and immersive media clients (displays) . 27
10.4.1 Introduction to scenario 3 . 27
10.4.2 Desired features for scenario 3 . 27
10.5 Current assumptions . 27
Bibliography . 28
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical activity.
ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations,
governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.
The procedures used to develop this document and those intended for its further maintenance are described
in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types
of document should be noted. This document was drafted in accordance with the editorial rules of the
ISO/IEC Directives, Part 2 (see www.iso.org/directives or www.iec.ch/members_experts/refdocs).
ISO and IEC draw attention to the possibility that the implementation of this document may involve the
use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity or applicability of any
claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC had not
received notice of (a) patent(s) which may be required to implement this document. However, implementers
are cautioned that this may not represent the latest information, which may be obtained from the patent
database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall not be held
responsible for identifying any or all such patent rights.
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions
related to conformity assessment, as well as information about ISO's adherence to the World Trade
Organization (WTO) principles in the Technical Barriers to Trade (TBT) see www.iso.org/iso/foreword.html.
In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 29, Coding of audio, picture, multimedia and hypermedia information.
A list of all parts in the ISO/IEC 23090 series can be found on the ISO and IEC websites.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.
Introduction
This document reports on the current state of the art impacting support of emerging immersive displays
and other applications that generally rely on 3D-model renderers to ingest and render media according to
the specific capabilities of the display or 3D application. The technologies in this report serve as motivation
for the development of standards to support emerging immersive displays and other applications where the
primary media format is based on 3D formats.
Technical Report ISO/IEC TR 23090-27:2025(en)
Information technology — Coded representation of
immersive media —
Part 27:
Media and architectures for render-based systems and
applications
1 Scope
This document provides context, motivation and use case descriptions for a set of Moving Picture Experts
Group (MPEG) standards that collectively deliver media directly to render-based applications such as game
engines with a renderer component, or standalone renderers. Emerging examples where such applications
are especially relevant include metaverse applications and immersive displays where such displays provide
an interface to renderers.
This document:
— describes the motivators leading to the development of new MPEG standards that facilitate the streaming
of media to render-based applications;
— differentiates between visual media distributed for video-based applications and visual media
distributed to render-based applications;
— provides an overview of a media workflow from content production to content distribution;
— provides general information on relevant components of render-based systems, including game engines
and renderers;
— identifies key components and resources (compute, storage, or network) comprising a heterogeneous set
of immersive displays and other render-based applications;
— documents use cases for end-to-end interoperability, including audio, video, graphics and systems
aspects for render-based systems and applications.
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
3D content authoring media formats
3D media formats or containers of 3D media formats that are provisioned with features and capabilities to
produce visual results that exactly match the content creator’s original, typically artistic, intent
Note 1 to entry: 3D content authoring formats are sometimes used to archive the original content and are regarded as
the ground truth for how the media is intended to be presented by display devices. Such media formats are typically
large in terms of the number of bytes they occupy on storage devices, or the network resources required to
transmit such media.
3.2
3D content distribution media formats
3D media formats specifically designed to be used in distribution systems (e.g. for transmission to client
display devices)
Note 1 to entry: 3D content distribution formats can be derived directly from 3D content authoring formats, where the
distribution format is a lossily or losslessly transformed representation of the authoring format.
3.3
aerial image
projected image that is floating in free space
3.4
asset
uniquely identifiable object comprising scene-based media
Note 1 to entry: Individual assets can be shared (e.g. reused) across multiple scenes.
Note 2 to entry: Each adapted version of an asset is a unique asset. An asset is uniquely identifiable with an identifier
(e.g. EIDR or URI).
3.5
attribute
metadata associated with a node used to describe a particular characteristic or feature of a node either in a
canonical or more complex form (e.g. in terms of another node)
3.6
container
serialized format to store and exchange information to represent all natural, all synthetic, or a mixture of
synthetic and natural scenes including a scene graph and all of the media resources that are required for
rendering of the scene
3.7
contribution format media
mezzanine format media
ingest format media
representation of the mastering format that is faithful to the fidelity of the original production but has
undergone some amount of lossy or lossless compression or transcoding to facilitate the distribution of the
media to a service provider
3.8
distribution format media
last-mile format media
media format that has been derived from the ingest format such that the derived format is suitable for
distribution to and consumption on the target client device
3.9
frame-based media
2D video with or without associated audio and metadata
3.10
game engine
software framework primarily designed for the development of video games that generally includes relevant
libraries for functions such as a physics engine and a presentation engine for the presentation of the visual, audio
and/or haptic aspects of the media comprising the games
Note 1 to entry: The presentation engine is especially relevant for immersive display processing systems that leverage
the capabilities of the engine to provide the visual rendering capabilities of an immersive display.
3.11
hologram
three-dimensional real and aerial image emitted from a display panel surface
3.12
light field
three-dimensional volume of light in which individual rays of light are characterized by the plenoptic
function
3.13
mastering format
mastering format media
packaged media format consisting of binary assets, metadata, content protection and related control structures
such that the encompassed production is nearly identical to the original production of the source media
Note 1 to entry: The primary purpose of the mastering format is to preserve the highest possible image and sound
fidelity; i.e. none of the data or information in the package has undergone a process to transform the package into a
format that is “light-weight” for subsequent contribution to a service provider.
3.14
node
fundamental element of the scene graph comprised of information related to the logical or spatial or
temporal representation of visual, audio, or other sensory information
3.15
photorealistic
provision of media to carry information about a visual object such that the rendering of the object can be
performed in a way that produces an output consistent with how the object would be viewed in the natural
world under a variety of lighting conditions
3.16
plenoptic function
model that describes individual rays of light in terms of their geometric origin, geometric direction and
wavelength, at a certain point in time
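As context for this entry, the plenoptic function is commonly written in the computer-graphics literature (this parameterization is from that literature, e.g. Adelson and Bergen, and is not defined by this document) as a seven-parameter function:

```latex
L = P(x, y, z, \theta, \phi, \lambda, t)
```

where (x, y, z) is the geometric origin of a ray, (θ, φ) its geometric direction, λ its wavelength and t the point in time.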
3.17
presentation engine
real-time rendering component or components of a game, rendering, or other software engine relevant for
creating the visual, audio and/or tactile representations of the media on behalf of the system
3.18
rasterisation
technique for 3D rendering that maps 3D representations of scenes to a 2D output buffer and is commonly
used for most real-time applications (e.g. games)
Note 1 to entry: Most modern GPUs are highly optimized for rasterisation in addition to ray tracing and deep learning.
3.19
ray tracing
technique for 3D rendering that faithfully simulates real-world optical effects and is capable of producing
photorealistic digital content
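Entry 3.19 describes ray tracing as faithfully simulating real-world optical effects. The core operation underlying that simulation is intersecting rays of light with scene geometry; the following minimal ray-sphere intersection sketch (illustrative only, not part of any MPEG specification) shows that step:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 and return the nearest non-negative root
    # (ray origin assumed to be outside the sphere), or None on a miss.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:          # ray misses the sphere entirely
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else None

# A ray cast from the origin along +z toward a unit sphere centred at (0, 0, 5)
# first hits the sphere at t = 4:
print(ray_sphere_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0))  # 4.0
```

A production ray tracer repeats this intersection test (against many primitives, via acceleration structures) and then evaluates the surface properties described in 4.2.1 at each hit point.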
© ISO/IEC 2025 – All rights reserved
3.20
real image
collection of focus points formed by converging rays of light
3.21
renderer
(typically software-based) application or process, based on a selective mixture of disciplines related to
acoustic physics, light physics, visual perception, audio perception, mathematics and software development,
that, given an input scene graph and asset container, emits a visual and/or audio signal suitable for
presentation on a targeted device or conforming to the desired properties as specified by attributes of a
render target node in the scene graph
Note 1 to entry: For visual-based media assets, a renderer can emit a visual signal suitable for a targeted display, or
for storage as an intermediate asset (e.g. repackaged into another container, i.e. used in a series of rendering processes
in a graphics pipeline); for audio-based media assets, a renderer can emit an audio signal for presentation on multi-
channel loudspeakers and/or binauralized headphones, or for repackaging into another (output) container.
3.22
scene-based media
scene graphs or assets that are or can be referenced by scene graphs
Note 1 to entry: Scene-based media can be further organized into a linear series of scenes for time-based presentations,
or into a branch-based structure for interactive presentations.
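To make the relationship between the terms scene graph, node (3.14), attribute (3.5) and asset (3.4) concrete, the following minimal sketch (illustrative only; the class names and structure are hypothetical and not defined by this document or by ISO/IEC 23090-28) models a scene graph in which a single asset is shared by two nodes:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Asset:
    """Uniquely identifiable media object (see 3.4), e.g. identified by a URI."""
    identifier: str

@dataclass
class Node:
    """Fundamental scene-graph element (see 3.14); attributes are metadata (3.5)."""
    name: str
    attributes: dict = field(default_factory=dict)
    asset: Optional[Asset] = None
    children: list = field(default_factory=list)

# One asset shared (reused) by two nodes, as described in Note 1 to entry 3.4:
plate = Asset("urn:example:asset:plate")
scene = Node("scene", children=[
    Node("plate_left", {"translation": (-1.0, 0.0, 0.0)}, asset=plate),
    Node("plate_right", {"translation": (1.0, 0.0, 0.0)}, asset=plate),
])
print(scene.children[0].asset is scene.children[1].asset)  # True
```

Note that the two nodes reference the same asset object rather than copies of it, which is what allows an asset to be adapted or transmitted once and reused across scenes.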
3.24
shader
type of computer program that was originally used for shading (the production of appropriate levels of light,
darkness and colour within an image) but which now performs a variety of specialized functions in various
fields of computer graphics, such as special effects and video post-processing unrelated to shading, or even
computation unrelated to graphics altogether
3.25
IMS
independent mapping space
vocabulary and corresponding metadata defined in ISO/IEC 23090-28 for the purposes of interchange across
a variety of 3D scene-based media formats
3.26
photorealism
photorealistic
ability or quality of a technology (e.g. a display) to reproduce a visual scene in a medium (e.g. a hologram)
that matches or closely matches how a person would view the same scene in the physical world
Note 1 to entry: A successful visual Turing test has been defined as a test in which a viewer is unable to distinguish
between the scene (including the geometry and radiance of objects in the scene) created by a display and the same
scene viewed in the natural world.
4 Overview
4.1 Motivation to develop standards for distribution of scene-based media
Standards-based support of immersive media clients that are capable of consuming and presenting scene-
based media is enabled via a set of emerging MPEG International Standards that are developed to stream
such scene-based media.1) The following list provides evidence that motivates the development of these
International Standards, i.e. to facilitate the growth of a currently nascent ecosystem of immersive services
and technologies:

1) Several trade names/trademarks of products are given for the convenience of users of this document and do not
constitute an endorsement by ISO or IEC of the products named. Equivalent products may be used if they can be shown
to lead to the same results.
— Advances in performance of GPUs, especially in their abilities to perform real-time ray tracing and neural
network computation;
— Pervasive use of the real-time rendering components of game engines by display manufacturers as a
means of ingesting volumetric media for the purpose of creating 3D visualizations;
— Increasing consumer interest in creating and delivering user-generated content;
— Increasing demand for media-oriented applications that create visceral connections with the end-user
(e.g. immersive media);
— Advances in content capture and creation technologies that can synthesize 3D models from camera-
captured imagery;
— Advances in digital content creation tools and renderers to create photorealistic output;
— Advances in network technologies that enable higher bandwidth speeds and lower latency for latency-
sensitive applications (e.g. gaming, real-time delivery of news and sports);
— Advances in display technologies toward enabling the creation of holographic images that are
indistinguishable from real-world objects;
— Interest in the emerging concept of the metaverse for which 3D assets (media objects) are anticipated to
be a significant form of media distributed across, and ingested by, metaverse applications.
4.2 Immersive displays and their information requirements
4.2.1 General
Perhaps the most important catalyst to enable an emerging ecosystem of immersive services and
technologies is the recent development of advanced display technologies capable of creating fully formed
holograms, e.g. 3D real, aerial images comprised of waveforms emitted from a display panel surface. Users
can experience the holograms without the requirement of having to wear assisting eyewear. Such displays
require input source media that are sufficiently dense in information to enable the presentation of visual
scenes comprised of objects that cannot be distinguished from the same objects when viewed in the natural
world. In other words, the visual information carried by the media needs to be sufficiently dense such that the
display is enabled (through its rendering of the media) to produce a photorealistic result. Sufficiently dense
information is further characterized by its ability to describe the complete and explicit geometry of each of
the objects in the visual scene. To achieve a photorealistic result, such sufficiently dense information needs
to include a description of each of the visual object’s surface properties, including how each object surface
responds to the presence of individual rays of light, e.g. rays of light generated in ray tracing algorithms and
procedures. This surface property information, in addition to the explicit geometry of each object, provides
essential information used by ray tracing algorithms that are offered in current GPU products to create
photorealistic light fields that can be emitted by holographic displays to create photorealistic holograms.
4.2.2 Raster media, 3D graphics and information requirements of immersive displays
Commonly used raster media formats such as those specified in ITU-R BT.601, ITU-R BT.709, ITU-R BT.2020,
ITU-R BT.2100 were not originally developed to provide photorealistic information to enable displays to
create holographic presentations. Rather, these raster media formats efficiently provide information
sufficient for displays to produce 2D visual presentations, with varying degrees of spatial resolutions, frame
rates and pixel depth. Recent developments in display technologies capable of simultaneously producing
distinct and a limited number of multiple views of their visual presentations have also demonstrated that
raster media formats can be suitable to meet the information requirements of such displays. However, the
upper limit on the suitability of these formats, in terms of the number of views and/or the resolution of the
display, remains unknown, but is nevertheless relatively small compared to the information requirements of
some holographic displays.
Given that emerging display technologies require sufficiently dense information to create photorealistic
representations of real-world objects, such commonly used raster formats fall short in delivering the
information needed by such displays. That is, the number of unique light-emitting elements, from each of
which a display emits an individual ray of light, needed to enable the display to reproduce a holographic
visualisation of a physical-world scene has been shown to be significantly large, i.e. on the order of billions
of pixels to possibly peta-pixels of resolution.
One early example of a holographic display with an especially dense resolution is the SolidLight display
system by Light Field Lab, Inc. The SolidLight display system has a nominal pixel density of 10 billion pixels
per square metre. Given that a corresponding raster signal could be in the 4:4:4 format at 10 bits per
pixel, a compression ratio of approximately 30,000:1 would be needed to achieve a compressed bit rate of
100 Mbps (megabits per second). Not only is a compression ratio of 30,000:1 infeasible with current or
foreseeable compression technologies, but a bitrate of 100 Mbps is also infeasible to transmit over current
or foreseeable network technologies. Fortunately, there are alternatives for representing the amount of
information needed for such a display. One alternative is via 3D graphics formats used in digital content
creation tools and production workflows.
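The approximately 30,000:1 compression ratio quoted above can be reproduced with a back-of-the-envelope calculation. The sketch below assumes 10 bits per pixel and a 30 Hz refresh rate; the refresh rate is an assumption, as the text does not state one:

```python
# Raw bitrate of a 10-gigapixel-per-square-metre panel, assuming
# 10 bits per pixel and 30 frames per second (assumed, not stated above).
pixels_per_m2 = 10e9
bits_per_pixel = 10
frames_per_second = 30

raw_bps = pixels_per_m2 * bits_per_pixel * frames_per_second  # 3e12 bit/s
target_bps = 100e6                                            # 100 Mbps budget
ratio = raw_bps / target_bps

print(f"raw bitrate: {raw_bps / 1e12:.0f} Tbit/s, "
      f"required compression ratio: {ratio:,.0f}:1")
# prints: raw bitrate: 3 Tbit/s, required compression ratio: 30,000:1
```

Under these assumptions the uncompressed signal is on the order of terabits per second per square metre of display, which is what makes a raster representation impractical for such displays.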
Recent developments in 3D graphics formats and technologies have proven to be capable of providing
sufficiently dense information to enable displays to present photorealistic results where raster formats
do not. The availability of these 3D graphics technologies, including 3D media formats, containers of these
formats and their corresponding rendering products hence provides fundamental building blocks in
enabling immersive displays to create their optimal visual presentations.
Yet another emerging and relatively nascent form of content generation is through generative methods of
artificial intelligence. However, as these methods are still developing, the focus of this report is on the use of
3D models to represent the information needed by a truly holographic display.
4.2.3 Defining necessary and sufficient thresholds of media information
An important realization about the information requirements of holographic displays that are capable
of creating photorealistic holograms, is that the requirements of such holographic displays define the
maximum threshold for information required by any display or any visualization application. That is, any
display or visualization application that is not capable of producing true-to-life, photorealistic, holographic
visualizations of 3D scenes requires less information than is needed by a display or other application that
is capable of doing so. Hence, the information requirements for holographic displays that are capable of
producing animated, photorealistic holograms, consistent with the laws of physics in the natural world,
define the amount of information that is necessary for holographic displays and sufficient for all displays.
4.2.4 Accommodating displays with a variety of information requirements
Displays and other visualization applications that do not have information requirements as rigorous as those
of holographic displays are nevertheless addressed by the same International Standards for support of such
holographic displays. That is, the distribution of media needs to be adaptable to accommodate a range of
client display technologies with information requirements that are less rigorous than those of holographic
displays. Hence the International Standards likewise need to address the functional ability to adapt the
media according to various display technologies.
4.3 Content creation not in scope for MPEG International Standards
Despite the importance of 3D graphics representations to serve as the conduits of information sufficient
for the presentation of real-world holographic visual scenes, raster media formats nevertheless can serve
an essential role in enabling the creation of such 3D graphics media to represent real-world objects. That
is, the capture of raster information about a scene comprised of one or more objects, from multiple angles
under multiple lighting conditions is a primary means of gathering information for 3D object synthesis
methods to create 3D graphics representations of real-world scenes. 3D object synthesis methods can
leverage photogrammetry or neural network algorithms to extract information about the geometry and
surface properties of real-world objects to create 3D “synthetic” representations of those same real-world
objects. Hence real-world raster information can serve an essential role for the creation of 3D graphics
representations of real-world scenes. Nevertheless, the methods and practices used to create 3D graphics
media synthesized from a dense collection of raster-based information are not in the scope of the subject
International Standards.
More specifically, the scope of the MPEG International Standards that are developed to facilitate the
emergence of a rich ecosystem of products and services capable of producing photorealistic immersive
experiences, does not address the production of 3D synthetic media from real-world imagery. Rather,
the entire content production ecosystem is assumed to have reached sufficient maturity such that the
production of 3D graphics media, whether based entirely on virtual world content production techniques,
or a combination of real-world (from real-world imagery) and virtual world content, or solely on real-world
imagery, does not form a set of requirements that are addressed by these International Standards. What
is, however, assumed to be available to facilitate the development of these International Standards is a
collection of media representations that sufficiently describe real world actors and objects such that the
media itself contains sufficient information for holographic displays to create photorealistic visualizations.
4.4 Challenges for distribution of immersive media
4.4.1 Interoperability between renderers
One common challenge that has plagued the nascent industry supporting the distribution of immersive media
is that there is no current (de jure or de facto) standard interface across the existing set of popular renderers.
Unlike standards that specify the behaviour of a video decoder in the presence of a single particular video
bitstream, it is currently not feasible to develop standards that likewise specify the behaviour of a visual
renderer when presented with a single 3D scene or asset. Such a problem can be characterized as a problem
with interoperability between renderers.
For example, given a single 3D scene described by the popular Universal Scene Description (USD) format,
hypothetical renderers A and B could produce visual rendered outputs SceneA’ and SceneB’ that do not
perfectly match, even though both renderers are presented with the same single 3D asset described by the
USD format. That is: SceneA’ and SceneB’ are not identical. This discrepancy between the behaviour and
outputs of different renderers (A and B), even when presented with a single common USD asset, presents
an enormous problem for content producers that wish to support a variety of clients that deploy different
renderer technologies.
4.4.2 Interchange between media formats
Another factor that also complicates the development of standards-based distribution of scene-based media
is that there is limited interchange between scene-based media formats. Figure 1 illustrates this problem.
On the left is a scene that is represented by the glTF 2.0 format developed by Khronos. On the right is the
same scene represented by the FBX format. As shown in Figure 1, the transparency and the colour of the
cover for the plate is lost in the FBX representation of the scene.
a) Scene represented by glTF 2.0 b) Scene represented by FBX
Figure 1 — Illustration of interchange problem between scene-based formats
4.4.3 Lack of standard specifications of media format
Yet another problem in the development of standards-based distribution of scene-based media is that for
many popular scene-based formats, there is no formal specification for the format. Some formats, such as
USD, are supported by open source libraries, but open source is dynamic in nature and difficult to reference
in the development of de jure standards.
5 On the usage of Immersive Technology Media Format for media interchange
Addressing the challenge of a lack of standard specifications for a media format that is capable of satisfying
the information requirements of a holographic display, the Immersive Digital Experiences Alliance (IDEA)2)
provides a suite of specifications that describe the Immersive Technology Media Format (ITMF). As
characterized by IDEA, the ITMF is a suite of royalty-free specifications that establishes a baseline for
an interchange of immersive media. The format is initially intended to be used for interchange amongst
industry-standard digital content creation (DCC) tools, i.e. for the packaging and creation of 3D synthetic,
computer generated and natural media, including audio and visual media. As a baseline format primarily for
use with DCC tools, assets described by ITMF are agnostic to the specific type of device on which they can
be presented. Visual media, for example, will be display-agnostic, so that a subsequent rendering step in a
(media and application aware) distribution system can reformat the visual media to match the capabilities
of the client display. The ITMF suite contains:
— A scene graph specification
— A data encoding specification
— A container specification
The
...







