change log

and it continues ...

A lot of bug-fixing!

Re-worked water supply system:

  • source and drain concept
  • computation of volume flow and remaining water in tanks
  • full computation of friction in tube system
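
The friction computation can be sketched with the Darcy-Weisbach equation for a straight hose segment; the function name, the constant friction factor, and the unit choices are illustrative assumptions, not the simulator's actual code:

```python
import math

def friction_loss_bar(flow_lpm, hose_diameter_m, hose_length_m,
                      friction_factor=0.02, density=1000.0):
    """Pressure loss in one hose segment via the Darcy-Weisbach equation.

    All names and the constant friction factor are illustrative
    assumptions for this sketch.
    """
    area = math.pi * (hose_diameter_m / 2.0) ** 2     # cross section [m^2]
    velocity = (flow_lpm / 1000.0 / 60.0) / area      # mean flow speed [m/s]
    dp_pa = (friction_factor * (hose_length_m / hose_diameter_m)
             * density * velocity ** 2 / 2.0)         # pressure loss [Pa]
    return dp_pa / 1e5                                # convert to bar

# Example: 400 l/min through 20 m of 52 mm hose
loss = friction_loss_bar(400, 0.052, 20.0)
```

Because the loss grows with the square of the velocity, doubling the flow quadruples the friction loss, which is why long attack lines are fed at higher pump pressure.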

new device: spreader

The spreader is used to rescue people trapped in cars.

The device is operated via a two-button control. It extends the “consumer device” class (it can be connected to a “hydraulic source” class device via the appropriate tubes).


Click the image to see a demo video on how to operate the device in a VR environment.

Future plans might cover simple deformation computation.

Aerial view and editor option

A scenery designer option is available: objects can now be placed interactively, and the state can be exported as a scenery file.

The scene can be observed from an aerial view.

added car crash model with injured person and deployed airbag

In preparation for the Long Night of Science in Dresden (June 16th): added a car crash model.

The car is rigged (airbag, seat-belt and doors are operational).

added new devices: multi-flow nozzle (Hohlstrahlrohr), divider

New multi-flow nozzle with handle, flow, and form controls. Handles are defined in the geometry and detected automatically by the loader. Added the “divider”.


grasping and release of objects and person

Changed from a separate hand model to a skeleton-based hand model; objects and persons can now be grasped and moved by the trainee.

“Real” rescue operation possible.

API documentation on-line

The API documentation can now be browsed.

added support for space mouse

full water supply system

The water supply system consists of sources, pumps, tubes, distributors and drains (nozzle).

Loss of pressure is computed (respecting loss by elevation as well as loss by friction).

Tubes are rendered as 3D C2 splines.
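
C2 continuity is what a uniform cubic B-spline provides, so the tube rendering can be sketched as evaluating such a spline segment from four consecutive control points (a minimal sketch; the function name and its pure-Python form are assumptions):

```python
def bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline (C2 continuous)
    at parameter t in [0, 1]. Points are (x, y, z) tuples."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# sample segment through four collinear control points;
# the curve starts at the weighted average of p0..p2
p = bspline_point((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                  (2.0, 0.0, 0.0), (3.0, 0.0, 0.0), 0.0)
```

Sweeping a circular cross section along such a curve yields a hose that bends smoothly with no visible kinks at segment joints.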


changed UDP to TCP communication

Switching from UDP to TCP is less robust against network failures, but it solves a lot of problems with lost and out-of-sync packets.

new sensor: Kinect 360 (PC) control unit (step and gesture control)

Using fully rigged avatars, full-body motion control via the Kinect camera is now possible in principle.

The kinect sensor can replace the “old” stepping sensors.

logging of events (radio traffic, actions of trainee, ...) using PostgreSQL

PTT-gadget to control radio communication


  • all objects within a defined range “listen” to the trainee (they receive the text detected by the speech recognition as a message)
  • if the PTT (radio) is used, objects farther away receive the message only if they are assigned to the same “radio group” as the sender
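
The two delivery rules above can be sketched as a filter over all scene objects; the field names (`pos`, `radio_group`) and the hearing range are assumed for illustration:

```python
def recipients(sender, objects, hearing_range=10.0, ptt=False):
    """Return the objects that receive a spoken message.

    Objects are dicts with 'pos' (x, y) and 'radio_group'; these
    field names are illustrative assumptions, not the project's API.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    out = []
    for obj in objects:
        if obj is sender:
            continue
        if dist(sender["pos"], obj["pos"]) <= hearing_range:
            out.append(obj)      # within earshot: always hears
        elif ptt and obj.get("radio_group") == sender.get("radio_group"):
            out.append(obj)      # far away: reachable only via radio
    return out
```
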

integrated Julius off-line speech recognition, oral interaction with autonomous persons (speech output via espeak)

changed sound system to osgAudio; location-dependent sounds, multiple sounds per object, controlled by script engine

new object: pumps and water supply system

"free-floating" smoke at the fire source, respects the virtual source, smoke generator at 0.75 * flame height, influenced by wind
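
A minimal sketch of the emitter placement described above, with a simple wind-drift integration step; the function and parameter names are illustrative assumptions:

```python
def smoke_particle_origin(flame_base, flame_height, wind, dt):
    """Seed a smoke particle at 0.75 * flame height above the flame
    base, then drift it with the wind vector for one time step.
    This is an illustrative sketch, not the simulator's code."""
    x, y, z = flame_base
    origin = (x, y, z + 0.75 * flame_height)
    # one explicit integration step of wind-driven drift
    return tuple(o + w * dt for o, w in zip(origin, wind))
```
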


zone-model for computation of thermal conditions and pressurization, as well as gas-flow at room-borders

Using a simple zone model, it is now possible to simulate smoke-flow effects such as those experienced when, for example, opening a door.
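
The gas flow at a room border can be sketched with the standard orifice equation driven by the zone pressure difference; the discharge coefficient and air density below are typical assumed values, not necessarily the simulator's:

```python
import math

def door_mass_flow(dp_pa, area_m2, density=1.2, cd=0.7):
    """Mass flow [kg/s] through an opening between two zones, driven
    by the pressure difference dp_pa [Pa] (orifice equation). The sign
    of the result gives the flow direction. cd and density are assumed
    typical values for this sketch."""
    sign = 1.0 if dp_pa >= 0 else -1.0
    return sign * cd * area_m2 * math.sqrt(2.0 * density * abs(dp_pa))
```

Even a few pascals of over-pressure in a fire room push a noticeable mass of hot gas through a standard door opening, which is the effect reproduced when a trainee opens a door in the simulation.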


render engine optimization, doors as dynamic object

The geometry may now contain “render volume” hints; objects in these volumes are only rendered if the camera is inside.

Doors can be interacted with (opening/closing).

Collision detection works for dynamic objects too.


smoke-detectors, signal lines, control-panel (BMZ, signal display, acoustic alarm, silencing, resetting), call-back (mouse and 3D tracker) → generic "signal" method, global states (time of day)

Implemented a generic signal-based communication system, which is used, for example, by the newly implemented smoke detection sensors.
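
Such a generic signal() method can be sketched as a small observer pattern, here with a hypothetical smoke detector forwarding an alarm along its signal line to the control panel; class and method names are assumptions, not the project's actual classes:

```python
class SignalNode:
    """Minimal sketch of a generic signal() method: every object can
    receive named signals and forward them to connected nodes."""
    def __init__(self, name):
        self.name = name
        self.connections = []   # downstream nodes (e.g. signal lines)
        self.received = []      # log of (kind, payload) pairs

    def connect(self, other):
        self.connections.append(other)

    def signal(self, kind, payload=None):
        self.received.append((kind, payload))
        for node in self.connections:
            node.signal(kind, payload)

# a smoke detector raising an alarm on the control panel (BMZ)
detector = SignalNode("detector_1")
panel = SignalNode("BMZ")
detector.connect(panel)
detector.signal("smoke", {"level": 0.8})
```

The same mechanism carries unrelated events (mouse call-backs, tracker input, global state changes such as time of day) without the sender knowing what kind of object listens.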

telnet-based console

Inspired by the exposure of properties in FlightGear, the system can now be controlled via a telnet-based interface and can expose properties in XML-formatted UDP packets.
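
A sketch of the idea, assuming a flat FlightGear-style property tree driven by textual get/set commands as they might arrive over a telnet session; the command syntax and property paths here are assumptions:

```python
class PropertyTree:
    """Sketch of a property interface: a flat dict of path -> value,
    driven by textual commands. The 'get'/'set' syntax is an assumed
    illustration, not the project's actual protocol."""
    def __init__(self):
        self.props = {}

    def handle(self, line):
        """Process one command line as received from the console."""
        parts = line.strip().split()
        if parts and parts[0] == "set" and len(parts) >= 3:
            self.props[parts[1]] = " ".join(parts[2:])
            return "OK"
        if parts and parts[0] == "get" and len(parts) == 2:
            return self.props.get(parts[1], "<unset>")
        return "ERR"

tree = PropertyTree()
tree.handle("set /sim/time-of-day 14:30")
value = tree.handle("get /sim/time-of-day")
```

Wrapping `handle()` in a socket accept loop yields the telnet console; the same tree can be serialized to XML and broadcast over UDP.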

list of lights, dynamic loading/unloading of lights for the shader; static, flashing, and beacon lights; equipped cars with static lights and blue beacons



extended script support for all object types; scenery definition via a special-purpose language with parser/loader

Using a special language to describe a scenery, it is possible to load and mix different geometry and object types. The scenery language supports mixing all 3D formats that can be loaded by OpenSceneGraph.

script-support, scripts can be loaded per (autonomous) object, trigger-based

Added script support for objects. The script engine uses a trigger based approach. Example:

#FireSimScript V0.1

trigger rnd_walk {
  enter {
    step random;
  }
}

trigger approach_fw {
  condition {
    distance(role ATM) < 3.0;
    distance(role GF) < 3.0;
    distance(role EL) < 3.0;
  }
  enter {
    disable trigger rnd_walk;
    stop animation;
    face $target;
  }
  leave {
    start walking;
    enable trigger rnd_walk;
  }
}

main {
  enable trigger rnd_walk;
  enable trigger approach_fw;
}

verbal communication with autonomous person via speech recognition and voice synthesis

A student added speech recognition via the google speech recognizer (available on-line only).

Agents produce speech output via a speech synthesizer.

Implemented simple interaction scheme (key-word detection) to let trainee (human) interact with agent via voice.

sky and terrain-alignment, collision detection

The system now supports collision detection; it is no longer possible to pass through walls like a ghost.

Added terrain: agents and users now follow the terrain. Interesting side effect: stairs can be walked.

Added sky-dome with cloud layers.

walking animation

Walking animations are created in Blender. The render engine “plays” these animations.

Click to see the video.

Hint: a real firefighter will never behave like the guy in the video, it's just a test!

autonomous person, multi-agent

Very simple multi-agent test. The video shows independent agents changing position.

first functional multi-user

Click to see a video of the first multi-user test. The video shows the different views of the attack group members side by side.

shader update: new fog density function, halo-effects with lights, light emitting flames

A student implemented a global shader using GLSL which supports per pixel shading as well as volumetric fog.

Cave Test Setup

The render engine now supports different output devices:

  • single or multiple screens
  • the Cave environment
  • head-mounted displays
  • stereoscopic output

changed to object request broker (ORB) technology to enable multi-user functionality

Using an ORB for exchanging data with multiple clients, the system becomes multi-user aware.

changed render engine to OpenSceneGraph and self-made GLSL shaders

This is a very important step, as graphics improved dramatically.





replaced wired stepping sensor by wireless accelerometer based stepping sensor

As I could get hold of some TI experimental boards containing accelerometer sensors, a new wireless stepping sensor could be developed.
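
Step detection from such an accelerometer can be sketched as a threshold crossing with hysteresis on the acceleration magnitude; the threshold values are assumed for illustration, not taken from the sensor firmware:

```python
def count_steps(samples, threshold=1.5, rest=1.1):
    """Count steps in a stream of acceleration magnitudes (in g).

    A step is registered when the signal crosses above `threshold`
    after having returned below `rest` (simple hysteresis, so one
    stomp cannot be counted twice). Threshold values are assumptions.
    """
    steps, armed = 0, True
    for a in samples:
        if armed and a > threshold:
            steps += 1
            armed = False
        elif a < rest:
            armed = True
    return steps
```

The hysteresis band between `rest` and `threshold` suppresses sensor jitter around the trigger level, which a single fixed threshold would double-count.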

fully implemented simulation of Teleprobe FH40G-A Dose-Rate Counter

The detector head gets the dose rate by superposition of multiple sources, each described by isotope and activity.

No shielding effects are considered.
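
The superposition can be sketched as a sum of unshielded point-source contributions falling off with 1/r²; the field names and the per-isotope gamma constant used here are illustrative assumptions:

```python
def dose_rate(detector_pos, sources):
    """Dose rate at the detector as a superposition of point sources.

    Each source contributes gamma_constant * activity / r^2; no
    shielding is considered, matching the simulation. The dict field
    names are assumptions for this sketch."""
    total = 0.0
    for s in sources:
        dx = detector_pos[0] - s["pos"][0]
        dy = detector_pos[1] - s["pos"][1]
        dz = detector_pos[2] - s["pos"][2]
        r2 = dx * dx + dy * dy + dz * dz
        total += s["gamma_constant"] * s["activity"] / r2
    return total
```

The gamma constant encodes the isotope (how much dose rate a given activity produces at unit distance), so a source is fully described by isotope and activity, exactly as stated above.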

using Polhemus Patriot for head and device

Now using a Polhemus Patriot as the tracking device for implemented gadgets like the nozzle and the Dose-Rate Counter.

simple extinction model

Describes the extinction of a fire by cooling with water, using the zone model.

The model is described in German.
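
The cooling side of such an extinction model can be sketched as sensible plus latent heat extracted from the hot gas layer by the applied water; the evaporated fraction is an assumed parameter and the function name is illustrative, not necessarily what the model uses:

```python
def cooling_power_kw(water_flow_lps, t_water=10.0, t_final=100.0,
                     evaporated_fraction=0.3):
    """Heat extracted by a water spray [kW]: sensible heating of the
    water to 100 degC plus latent heat for the evaporated fraction.
    The heat constants are textbook values; the evaporated fraction
    is an assumed parameter for this sketch."""
    CP_WATER = 4.19    # specific heat of water [kJ/(kg K)]
    H_VAP = 2257.0     # latent heat of vaporization at 100 degC [kJ/kg]
    mass_flow = water_flow_lps * 1.0            # ~1 kg per litre
    sensible = mass_flow * CP_WATER * (t_final - t_water)
    latent = mass_flow * evaporated_fraction * H_VAP
    return sensible + latent                    # kJ/s == kW
```

Subtracting this power from the hot zone's energy balance in the zone model drives the layer temperature down, which is the extinction-by-cooling effect described above.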

integration of PowerGlove P5 data-glove

integration of HMD eMagin Z800 including the tracking system