Dr. Don Keogh - Modeling Variable Bit-Rate MPEG Sources
No abstract submitted.
Stewart Jenvey - Radio Propagation Inside Buildings
Predicting the performance of indoor radio communications requires an understanding of
the mechanisms by which radio waves propagate in, and how they interact with, building
structures and contents. Planning tools currently available include empirical prediction models
derived from large databases of measurements in typical environments and models that
utilize ray optics and the geometry of the building to predict the radio wave propagation.
The research currently being conducted is concerned with the ray optical prediction methods
and indoor radio propagation mechanisms. It has been directed at devising methods for
measuring the parameters required by the prediction models, such as reflection, transmission,
scatter and diffraction coefficients, and more recently at examining the propagation fields
themselves.
Measurements of radio waves propagating inside a building at PCS frequencies have been
taken in such a manner as to permit the creation of computer animations of the radio waves
propagating in that building. These animations clearly show the radio waves propagating
through doors and windows; reflecting from walls, floor and ceiling; and diffracting from
edges such as doorframes and corridor corners. The fields are observed to behave in a manner
very similar to that predicted by a computer model based on ray optical analysis, which gives
confidence in the validity of modeling the propagation in this way.
The measurements of the propagating fields were taken in magnitude and phase over an
extended region of space at sites remote from the transmitter. Techniques similar to those
used in Synthetic Aperture Radar were then used to measure the angle of arrival and magnitude
of the principal propagating components of the field. These components are compared with the
significant components of the predicted propagating field to further check the validity of the
predictions made by the model.
The measurements for gathering this data were taken using a computer controlled
positioning mechanism that scanned a receiving antenna in a raster pattern over the planar
region of interest whilst a computer controlled vector network analyzer, which was acting as a
receiver and was phase locked to a distant transmitter, measured the magnitude and relative
phase of the field at each measurement point.
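As a rough illustration of this kind of processing (a sketch only, not the analysis software used in the study), the angle of arrival of the dominant propagating components can be estimated from a uniform line of complex field samples by taking a plane-wave (spatial FFT) spectrum. The carrier frequency, sample spacing and the two synthetic arrivals below are assumed values.

```python
import numpy as np

# Sketch of synthetic-aperture style angle-of-arrival estimation from a
# uniform linear scan of complex field samples. All parameters are assumed.
c = 3e8
f = 1.9e9                       # a PCS-band carrier frequency (assumed)
lam = c / f
d = lam / 4                     # spacing between measurement points
n = 256                         # number of points along the scan line
x = np.arange(n) * d

# Synthetic field: two plane waves arriving from 20 and -35 degrees.
k = 2 * np.pi / lam
angles_true = np.deg2rad([20.0, -35.0])
field = sum(a * np.exp(1j * k * x * np.sin(th))
            for a, th in zip([1.0, 0.6], angles_true))

# The FFT over the scan maps spatial frequency kx = k*sin(theta) to angle.
spectrum = np.fft.fftshift(np.fft.fft(field * np.hanning(n)))
kx = np.fft.fftshift(np.fft.fftfreq(n, d)) * 2 * np.pi
visible = np.abs(kx) <= k       # only |kx| <= k corresponds to real angles
theta = np.rad2deg(np.arcsin(kx[visible] / k))
mag = np.abs(spectrum[visible])

# Pick local maxima of the angular spectrum above a simple threshold.
is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]) & (mag[1:-1] > 0.2 * mag.max())
print("estimated angles of arrival (deg):", np.round(np.sort(theta[1:-1][is_peak]), 1))
```

With a long enough scan line, the printed angles fall within a fraction of a degree of the two synthetic arrival directions.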
Dr. Malcolm Reid & Ali Malekzadeh - Congestion Issues in ATM Networks
As ISDN evolves towards a broadband service (B-ISDN), circuit switching techniques can
no longer efficiently support its applications. Technological advances in high speed statistical
multiplexing, switching and transmission systems, as well as possible opportunities to provide
new services with broadband capabilities, have led to the creation of B-ISDN. B-ISDN
supports connection-oriented and connectionless heterogeneous services, and ATM networks
are being recommended for their implementation. A well-designed ATM network, unlike an
STM network, may still approach congestion during peak traffic flows. The system should
therefore be equipped with various methods of congestion control to maintain the desired
Quality of Service (QoS) for each class of service. The role of congestion control and resource
management in an ATM network is clearly understood and thus the extra costs involved in the
utilization of congestion control and resource management can be justified.
Preventive methods of congestion control in ATM networks are a promising approach in the
early stage of implementation of a system, while reactive or dynamic methods appear to be
more appropriate with system growth. The overall system must not be over dimensioned, as it
becomes inefficient due to the overheads required in an ATM network. In such cases, ATM
may not be a better alternative to STM. Policing functions and bandwidth enforcement are the
most important features which should be incorporated in an ATM network to monitor the
performance of the source. They are also employed to take appropriate action whenever the
connection agreement during call set-up is violated. This study intends initially to address
issues concerned with the influence of cell-level and call-level congestion control in ATM
networks.
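As an illustration of the policing idea mentioned above (a minimal sketch only, not a scheme from this study), a leaky-bucket style conformance test of the kind commonly used for bandwidth enforcement can be written as follows; the cell arrival times and contract parameters are invented.

```python
# Minimal leaky-bucket (virtual scheduling) sketch of a policing function.
# Parameter names and example values are assumptions for illustration only.
def police(cell_arrival_times, increment, limit):
    """Return a list of (time, conforming?) for each arriving cell.

    increment : nominal inter-cell time implied by the agreed rate
    limit     : tolerance allowed before a cell is treated as violating
    """
    tat = 0.0                        # theoretical arrival time of next cell
    results = []
    for t in cell_arrival_times:
        if t >= tat - limit:         # cell conforms to the traffic contract
            tat = max(tat, t) + increment
            results.append((t, True))
        else:                        # cell violates the connection agreement
            results.append((t, False))
    return results

# Example: contract allows one cell per 10 time units with tolerance 2.
print(police([0, 10, 12, 13, 40], increment=10, limit=2))
```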
Dr. Sajal K.R. Palit - Microwave Antennas for Mobile Satellite
Communications
My main research interests in telecommunications are in Microwave Antennas, RF Circuits
and Devices, Microwave-Optical interactions and Mobile Satellite Communications. I also
have research interests in the simulation and design of analog and digital electronic circuits,
wideband amplifiers, and engineering education.
Microstrip antennas are becoming increasingly popular in mobile & satellite
communications, wireless networks and microwave sensors due to their small size, light
weight and low profile. One of their principal disadvantages is narrow bandwidth. To enhance
bandwidth, several microstrip antennas were designed and constructed over the last couple of
years. For example, a novel composite patch antenna was designed using a coupled
transmission line model. A remarkable bandwidth improvement, from a mere 2% for a single
coax-fed rectangular patch to 22% for the composite patch, was achieved. Recently, a dual-
band notched patch antenna has been designed which yielded impedance bandwidths of 38%
for band 1 and 27% for band 2. At present, I am working on a double-notched patch for
multiband operation. Microstrip arrays are replacing dish antennas for satellite
communications. One of my students is investigating an 8-element microstrip array.
Investigations on various feed techniques are in progress. I have also pursued research on
power dividers, phase shifters and attenuators design and their integration with the antenna
system. Transmission line and cavity models, the finite difference method and the method of
moments are used for theoretical analysis of the designed antennas.
I have also made a significant contribution to the analysis and design of dielectric and
dielectric-loaded pyramidal horn antennas, rectangular and cylindrical dielectric rods
(uniform & tapered) as feed antennas for satellite dishes using step by step approximation.
More rigorous mode matching technique is currently under investigations. Low cross
polarization levels, lower side lobes, high efficiency and high gain were achieved.
Some of my present and future research projects are: 1) A novel hexagonal horn (empty and
dielectric-loaded) as a feed antenna for satellite communications. 2) An omnidirectional
dielectric rod antenna with metallic grating for mobile communications. 3) Broadband active
microstrip antenna design. 4) Multiband microstrip antenna design. 5) Microstrip phased
array antennas. 6) Design and analysis of a PC-based ECG analyzer. 7) An ultra wideband
cavity-backed microstrip spiral antenna for satellite communications. 8) Modeling of indoor
radiowave propagation. 9) Simulation of digital and analog circuits using Viewlogic and
PSpice. 10) Simulation and design of a low-noise wideband RF amplifier, etc.
Dr. Bruce Tonkin - Quality of Service Management over ATM and the
Internet
Network infrastructures are increasingly being used for a range of services from electronic
mail to voice or video conferencing. It is difficult to support all these services economically
with a single quality of service.
ATM was developed to operate in a connection-oriented manner to allow the network to
allocate resources differently for different connections. The simplest options are a circuit like
service that offers constant bit rate with a low delay, and a best effort data service. Other
options are available, but at the expense of more complexity in the end-terminal equipment
and in the switches.
The Internet protocol is now widely used for carrying information of all types. Recently it
has also been used for voice and video services. The Internet protocol operates over a wide range of
network infrastructures. Some of these infrastructures have no low-level support for a variable
quality of service. There is now strong demand for mechanisms to control quality of service in
an Internet environment. One approach is to use the ATM mechanisms to support the Internet.
However most end-users do not have direct ATM connections. The alternative is to attempt to
provide quality of service within the Internet Protocol. A signaling protocol for the Internet
has been developed called RSVP, but this is losing support from Internet operators. Another
approach is to use information in the IP header and make intelligent queuing decisions within
Internet switches (or routers) to control quality of service. In this mode the Internet does not
make guarantees of service but can offer a differential grade of service.
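As a rough sketch of this header-based approach (not a standard mechanism; the precedence-to-class mapping and the service weights are invented for illustration), a router might classify packets from the IP header and serve per-class queues with a weighted round robin:

```python
# Sketch of header-based differential queuing in a router. The class mapping
# and weights below are invented for this example, not taken from any standard.
from collections import deque

QUEUES = {"voice": deque(), "video": deque(), "best_effort": deque()}
WEIGHTS = {"voice": 4, "video": 2, "best_effort": 1}   # slots per round

def classify(ip_precedence):
    """Map the 3-bit precedence field of the IP header to a queue."""
    if ip_precedence >= 5:
        return "voice"
    if ip_precedence >= 3:
        return "video"
    return "best_effort"

def enqueue(packet, ip_precedence):
    QUEUES[classify(ip_precedence)].append(packet)

def service_one_round():
    """Weighted round robin: higher classes get more transmission slots,
    but no class receives an absolute guarantee."""
    sent = []
    for cls, weight in WEIGHTS.items():
        for _ in range(weight):
            if QUEUES[cls]:
                sent.append(QUEUES[cls].popleft())
    return sent

for i, prec in enumerate([0, 6, 3, 6, 0, 1, 5]):
    enqueue(f"pkt{i}", prec)
print(service_one_round())
```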
The research is considering the various methods of controlling quality of service for
operating voice and video services over a heterogeneous network running the Internet
protocol. Network components of particular interest include Ethernet switches, backbone
ATM switches, cable modems, and ADSL interfaces.
James Kershaw - The Efficient Use of the Available Bit Rate Traffic Class in
the Wide Area Network
Asynchronous Transfer Mode was chosen to support the broadband integrated services
digital network (B-ISDN), simultaneously providing an integrated network for multiple
traffic types, and scope for bandwidth efficiency gains through statistical multiplexing.
Further efficiency gains were made possible through the adoption of the Available Bit Rate
(ABR) traffic class, which adds feedback from the network to maximize bandwidth use while
reducing risk of cell loss and fairly distributing bandwidth amongst competing sources. ABR
is assumed to work for local area networks due to the generally very small propagation and
switching delays, but to date there has been no comprehensive study of the viability of using
ABR over wide area networks. My thesis seeks to redress this problem by answering two
fundamental questions: "What is the efficiency and effectiveness of ABR as a protocol for
the wide area network?", and "How should multi-vendor networks be designed and operated
to support an efficient ABR implementation?".
I seek to answer the viability question by quantifying the performance of various application
types as they move from the local to the wide area, and by establishing guidelines for the
selection of ABR as the preferred traffic class from both a user and network operator
perspective, based upon the expected performance of these applications. From this thesis both
users and network operators will be able to make informed decisions about the costs of
supporting or using various application types with ABR.
The second concern is answered through simulation of the overall performance of multi-
vendor ABR networks under various scenarios. My thesis will give guidelines for the design
and operation of multi-vendor ABR networks to achieve near optimal performance for a wide
range of application types. The guidelines developed will be generally applicable and involve
both switch specification during network design, and parameter specification during network
operation.
Farzad Safaei - Network Architecture Planning in Uncertain Environments
This thesis deals with the problem of multi-layer network planning in an uncertain
environment. To clarify the focus of this thesis, it is essential to expand on the meaning of the
terms in italics.
The network referred to here is a large scale telecommunication Wide Area Network such as
one owned by a public network provider. By 'large scale' it is meant that the network has a
significant geographical span and customer base so that 'errors' in investment decisions can
result in substantial losses to the carrier.
By multi-layer planning it is meant that the planning problem is primarily architectural and
involves more than a single protocol layer. For example, the combined design of the transport
and switching layers in terms of the architecture of each layer and the
interconnection/interworking functions between the layers will constitute a multi-layer
planning problem. The network architecture refers to the structure of each layer in terms of its
topology, the choice of technology employed in each layer, the way each technology is
deployed, and the functional split between the various protocols when there is an overlap.
The uncertainty in the operator's environment may stem from several factors:
1. the uncertainty in the extent and penetration of services and customer demand;
2. the uncertainty in the cost of underlying technologies for network infrastructure;
3. the uncertainty in the industry structure in terms of the number of competitors and the extent
of competition for a particular service or market segment.
This thesis provides an analytical framework for selecting a candidate architecture in a
particular environment when faced with a multitude of future scenarios. The architectural
planning methodology developed in this thesis will go beyond the dimensioning techniques to
address more fundamental questions in relation to network architecture and evolution.
The thesis also considers network architectures for the core and access networks and
develops several novel architectures based on the currently most promising technologies
(ATM, SDH, and photonics).
In the core network, it proposes the Big Crossconnects Architecture that belongs to the
family of flat networks. This architecture is shown to be more in tune with the cost regime
likely to be prevalent in the late 90s and beyond, where the cost of operations and control (and to
a lesser extent switching) will be dominant in comparison with that of transport bandwidth.
To overcome the coupling between transport and switching utilization in flat networks, the
concept of Discordant Virtual Paths has been proposed.
The Big Crossconnects Architecture, together with the Discordant Virtual Paths, provides a
scalable platform that greatly simplifies higher layer functions. Based on this architecture, an
efficient escalation mechanism is devised which uses strengths of ATM and SDH in achieving
fast and cost effective restoration. The escalation mechanism is based on the proposed
Dormant VP Protection at the regional network and the VP Link Switching scheme for the
intercapital network. Architectures for QoS differentiation and algorithms for dimensioning
and reconfiguration of bandwidth for the proposed architecture have been developed.
In the access network, several survivable architectures based on the self-healing rings (SDH,
ATM, and photonic) have been studied and compared. It is shown that the ring topology does
not provide the appropriate architectural trade-offs for the access cost regime. The thesis
proposes a new topology called the Self-Healing Wheel architecture with marked advantages
for a wide range of future scenarios. Self-Healing Wheels will combine the fast restoration
time of SDH with the full multiplexing potential of ATM, can support multiple QoS requirements
efficiently, and provide the best features of star and ring topologies for the working and protection
networks. Another advantage of Self-Healing Wheels is the relative simplicity of the peripheral
nodes that in turn would lead to cheaper access devices.
The thesis develops mathematical and analytical techniques for dimensioning, bandwidth
allocation, and real-time operation of switched data services over the ATM/SDH rings and
Self-Healing Wheel platforms. These models take both ATM and SDH layers into account
and are not confined to optimization of a single layer in isolation.
Hock Leong Chong - Dynamic Re-routing in Response to Faulty ATM
Network Links
In connection-oriented systems, the calling party has to be connected to the called party
before any information exchange commences. Connection-oriented mode provides a high
integrity service to both the calling and called parties. It involves setting up a path connecting
the calling party to the called party. As a realistic network is not fully meshed, a path originating
from the source node may traverse several intermediate nodes to reach the destination
node.
There may be more than one path from one node to another due to the configuration of the
inter-connected network nodes. The routing algorithm implemented by the network hence
determines which path or route is the most appropriate for a pair of nodes at the particular
time of call connection request.
If a link in the network fails, the call connections that use this link are affected. This
inevitably causes cell loss. It is important to restore the connections as soon
as possible so that cell loss is minimized. Usually there is some spare capacity that is allocated
to respond to sudden network failures. This spare capacity is used to provide an alternative
path between the end nodes of the faulty link. However there may not be sufficient capacity to
reroute all the call connections from the faulty link.
An approach to such a problem is to ask the calling party to tear down its old connection
path and set up a new path. This method may be referred to as global rerouting. In order to
reduce the reroute latency due to the reconnection between the calling and called parties, it
may be desirable to retain as much of the current resource reservation (links) as possible for
each call connection. It is only necessary to find a new path between the end nodes of the
faulty link at the expense of not choosing the best route from the point of view of the calling
party. We may say that the end nodes attempt to reroute the call connections locally. If the
end nodes of the faulty link cannot reroute all the call connections, they may request the
neighboring nodes to reroute the rest. Similarly, if the neighboring nodes are not able to
reroute the rest of the call connections, the calling parties will be responsible for setting up new
paths.
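As a rough illustration of local versus global rerouting (a sketch only, not the schemes to be evaluated), the fragment below first tries to patch an affected connection between the end nodes of the failed link using links with spare capacity, and falls back to an end-to-end reroute if that fails. The topology, spare capacities and hop-count path search are simplifying assumptions.

```python
# Illustration of local vs. global rerouting around a failed link.
# The example graph and its spare capacities are invented for this sketch.
import heapq

def shortest_path(graph, src, dst):
    """Hop-count Dijkstra over links with remaining spare capacity > 0."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, spare in graph[u].items():
            if spare > 0 and d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(heap, (d + 1, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def reroute(graph, connection_path, failed_link):
    """Try a local patch between the end nodes of the failed link;
    fall back to a global reroute between the connection endpoints."""
    a, b = failed_link
    local = shortest_path(graph, a, b)
    if local:
        i = connection_path.index(a)
        return connection_path[:i] + local + connection_path[i + 2:]
    return shortest_path(graph, connection_path[0], connection_path[-1])

# Spare capacity on each link (directed here for simplicity).
graph = {"S": {"A": 1}, "A": {"B": 0, "C": 1}, "C": {"B": 1},
         "B": {"D": 1}, "D": {}}
print(reroute(graph, ["S", "A", "B", "D"], failed_link=("A", "B")))
```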
The performance of these approaches under different load and link failure conditions
will be investigated. Several metrics will be chosen to
give an indication of the performance. They include the success rate of rerouting and the
reroute latency.
Jeremy Lu - Internet Real-time Transport Protocols
Internet videoconferencing is gaining widespread use because of the universal connectivity
of the Internet and because conferencing can be accomplished with software alone. There are
still problems of picture quality and bandwidth; however, in time these will largely vanish. The
Internet provides a unique environment for videoconferencing because multipoint operation is
supported by design. Other technologies, such as dial-up videoconferencing, can only
connect two participants unless expensive multipoint hardware or services are employed.
Internet videoconferencing can be broadcast to users with nothing more than their
computer and an Internet connection. Cheap software is used to receive a conference, making
it easy to set up conferences throughout Internet and intranet networks of users. This thesis
investigates the network standards that are in use in the Internet based videoconferencing
system including the IP multicast, H.323 standard, and the Resource Reservation Protocol
(RSVP). The second part of the thesis focuses on a discussion of the mechanisms for
maintaining Quality of Service (QoS) in different videoconferencing software packages.
Chi Nan Lim - Voice over Asynchronous Transfer Mode (ATM)
An Asynchronous Transfer Mode (ATM) network is capable of integrating existing data,
voice and video networks. There are currently four different bearer services available under
ATM, which are the Constant Bit Rate (CBR), Variable Bit Rate (VBR), Available Bit Rate
(ABR) and Unspecified Bit Rate (UBR). However, only the first two of the bearer services
listed above (CBR & VBR) are being commonly employed to transmit voice.
Both voice and data place very different demands on the networks that carry them. Bursty
data traffic needs widely varying amounts of bandwidth and is typically tolerant of network
delays. Voice traffic, on the other hand, needs a small continuous bandwidth and is very
sensitive to delay, which emerges as degradation of audio quality. As such, CBR is more
commonly used to transmit voice since the peak bandwidth is always guaranteed. VBR, on the
other hand, permits allocation of bandwidth on an as-needed basis. On this count, CBR is not
as efficient as VBR since it reserves the bandwidth regardless of whether there is any voice to
transmit.
This particular project aims to measure experimentally several pertinent parameters
governing the quality of voice transmission, such as cell delay variation, percentage of packet
loss and jitter for each of these bearer services. These parameters are to be measured
experimentally on an ATM test-bed with a software interface. By examining these
parameters, conclusions can be drawn on the suitability of each bearer service for the type of
application in question. Finally, in order to ensure that the voice on the receiving end remains
intelligible, the software written should also be able to adapt to network congestion.
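As an illustration of the kind of measurement involved (the timestamps and the smoothing constant below are assumptions, not results from the test-bed), mean delay, peak-to-peak delay variation and a smoothed interarrival jitter can be derived from per-cell send and receive times:

```python
# Illustrative computation of delay, delay variation and smoothed interarrival
# jitter from per-cell (or per-packet) timestamps. The sample timestamps and
# the 1/16 smoothing factor (borrowed from common RTP practice) are assumptions.
def delay_statistics(send_times, recv_times):
    delays = [r - s for s, r in zip(send_times, recv_times)]
    mean_delay = sum(delays) / len(delays)
    peak_to_peak_cdv = max(delays) - min(delays)

    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0   # smoothed estimate
    return mean_delay, peak_to_peak_cdv, jitter

send = [0.0, 0.02, 0.04, 0.06, 0.08]
recv = [0.011, 0.032, 0.049, 0.075, 0.090]
print(delay_statistics(send, recv))
```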
Vivian Lee - Developing Primitives for Interfacing Multimedia with the Network using Java
Research involves building a set of standard primitives to provide an application
programming interface for multimedia. These primitives will be sandwiched between the
application and the Transport layer and will take care of the required quality of service for
the multimedia software at the Application layer. A number of programming languages have
been considered for the implementation. C and C++ were used previously, and the newer
programming language, Java, is now being evaluated. Because Java is an object-oriented
language with a set of built-in classes for network programming, it has been found to be a
better tool. It is easy to program multi-tasking applications with Java. In addition, Java has an
automatic garbage collection mechanism. All these are also important criteria for network
interface application programming.
Brian Kelly - Research directions at Telstra Research Laboratories
No abstract submitted.
Dr. Khee K. Pang - Analyses and Queuing Models of Voice/Video Traffic
This is part of an ongoing research project, Video over ATM, carried out in CTIE for some
time now. In this talk, I shall concentrate on the analytical aspect of the queuing system while
Stuart Dunstan, in the following talk, will present general aspects of the system. We hope that
studying the analytical models at some length will help us to better understand various
aspects of the queuing processes.
For an aggregated voice traffic stream, the individual sources are assumed to be on-off sources,
and the simplest model for the multiplexed stream is the familiar birth-death Markov chain.
The complete queuing system, however, was modeled by the AMS model, which allows
parameters of interest, such as the queue length distribution, overflow probabilities and
equivalent bandwidth, to be evaluated explicitly. It is the most basic model and familiarity
with it is essential in this study. The model agrees with simulated results only when the
utilization factor is low. In statistical terms the system class is labeled M/D/1/K.
However, the AMS model is only a "fluid approximation" of a packet voice traffic system.
The most comprehensive model for the packet voice system that we found is J. Daigle's
semi-Markov model. The model uses two state variables {l,p} to describe the system, where l
is the queue length (number of packets) and p is the number of active sources. Although
the process is semi-Markov, a renewal occurs at the instant of the transition, which allows
analysis by an embedded Markov chain. The analytic results agree well with the simulated
results. The system class is labeled ΣGI/D/1.
No comparably comprehensive queuing system has been found for the video queuing model. This does
not mean that there is a shortage of video source models. In Phase 1 of this project, we
investigated a number of video source models, including autoregressive, TES, Markov and self-
similar variable bit rate source models. But the more sophisticated source models, which take
into consideration the IBP structure of the MPEG bit stream, would not allow us to evaluate
the complete queuing system readily. For this reason, we chose a simple DAR(1) model, as
described by D. Heyman, for our study.
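As a concrete illustration of the DAR(1) idea (a sketch only; the marginal distribution and the correlation value below are invented, not those fitted in this study), each new frame size either repeats the previous one with probability rho or is redrawn from the marginal distribution:

```python
# Sketch of a DAR(1) (discrete autoregressive, order 1) frame-size source:
# with probability rho the previous value is repeated, otherwise a fresh
# value is drawn from the marginal distribution. The marginal and rho used
# below are illustrative assumptions.
import random

def dar1_source(marginal, rho, n, seed=1):
    """marginal: list of (value, probability); rho: lag-1 correlation."""
    rng = random.Random(seed)
    values, weights = zip(*marginal)
    x = rng.choices(values, weights)[0]
    out = []
    for _ in range(n):
        if rng.random() >= rho:                 # redraw from the marginal
            x = rng.choices(values, weights)[0]
        out.append(x)                           # otherwise repeat previous value
    return out

# Example: frame sizes in cells per frame, with lag-1 correlation 0.9.
trace = dar1_source([(30, 0.5), (80, 0.3), (200, 0.2)], rho=0.9, n=10)
print(trace)
```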
Next we consider K such homogeneous DAR(1) sources in the multiplexed system. Using
the Chernoff-dominant eigenvalue (CDE) approach developed by A. Elwalid, we are able to
compute the dominant eigenvalue explicitly. From this result, other parameters of interest,
such as cell loss ratio and bound on the number of admissible sources, can be obtained.
James Wiles - Detecting True Motion in Digital Video Sequences
Motion estimation and motion compensation are important components of video coding.
The detection of, and compensation for, motion between successive frames in a video
sequence, allows temporal redundancies to be removed. This is one of the most significant
reasons why hybrid video coding schemes, such as ITU-T standards H.261 and H.263, and
ISO standards MPEG-1 and MPEG-2, have been able to achieve such good data compression.
The block motion estimation (BME) technique adopted by most video coding standards is to
divide each frame into blocks of 8x8 or 16x16 pixels, and find another block of pixels in a
reference frame such that the sum of absolute differences (SAD), or sum of squared
differences (SSD) is minimized. The search range for motion vectors is typically +/- 16 pixels
in both the horizontal and vertical direction; some of the standards also allow half-pixel
motion vectors, in which case bilinear interpolation is used. While this technique provides a
good "statistical" estimation of motion, it has a number of limitations. Because of its rigid,
block-based structure, it is really only suitable for detecting purely translational motion. It
also fails at motion boundaries, where blocks may cross the boundary between regions with
differing motion. Sampling and quantization of the luminance field, and of the motion vector
field, are also significant sources of errors. The BME technique provides a motion vector for
every block. It is generally acknowledged that further improvements in representing video at
very low bit rates must incorporate a higher level of image understanding. Recent video
coding research, particularly in MPEG-4, has focussed on object-based and content-based
coding methods. These methods require object segmentation information, and true motion
estimation (optical flow) for each individual object.
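As a minimal sketch of the full-search block motion estimation described above (block size and search range follow the typical values quoted; the frames and test shift are invented for illustration), the SAD-minimizing vector for one block can be found as follows:

```python
# Sketch of full-search block motion estimation by minimizing the sum of
# absolute differences (SAD). Everything beyond the 16x16 block and +/-16
# search range quoted above is an illustrative assumption.
import numpy as np

def block_motion_vector(current, reference, y, x, block=16, search=16):
    """Return the (dy, dx) that minimizes SAD for the block at (y, x)."""
    target = current[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + block > reference.shape[0] \
                    or rx + block > reference.shape[1]:
                continue                       # candidate falls off the frame
            cand = reference[ry:ry + block, rx:rx + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# Toy example: the reference frame shifted right by 3 pixels becomes the
# current frame, so the estimated vector should be (0, -3).
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)
current = np.roll(reference, 3, axis=1)
print(block_motion_vector(current, reference, y=24, x=24))
```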
My work looks at the application of BME to the detection of true motion, and analyses its
limitations and shortcomings. I propose a model for quantifying motion compensation errors
due to re-sampling and quantization, and show its use in differentiating correct and incorrect
motion vectors. Further work involves improving the motion representation with a
multiscale hierarchical approach.
Stuart Dunstan - Statistical Multiplexing of Variable Bit Rate Video
This research addresses the issue of codec and network resource allocation for variable bit
rate video. An understanding of encoder and decoder buffer management, packet multiplexing
strategies, and scheduling algorithms at network nodes, is required. The work investigates the
opportunities for statistical resource sharing amongst MPEG coded variable bit rate video,
subject to an end-to-end delay constraint. It may be that peak rate allocation of network
bandwidth is the only means by which real time video can be successfully handled.
For MPEG and ITU-T video coders, the difference between the time when the bits of a
video frame are produced at the video encoder and the time when the bits of that frame
enter the video decoder must be constant. Two effects work against this requirement: the
channel rate constraint and delay variation in the channel.
The literature has noted two distinct regions of performance in an ATM multiplexer. Cell
scale congestion is due to simultaneous arrival of cells in a time scale equivalent to the inter-
cell time of one source. The aggregate arrival rate is less than the output link rate. Burst scale
congestion occurs when the total arrival rate, averaged over a period greater than an inter-cell
time, is greater than the multiplexer capacity. Burst scale congestion, and hence cell loss,
depend upon the autocorrelation function.
It is found that cell scale congestion is approximated by the M/D/1 model while burst scale
congestion is approximated using the fluid flow approximation.
Rate envelope multiplexing requires burst scale congestion to be negligibly small. Burst
scale congestion can be estimated using a worst case assumption in which the source is
modeled as a binomial source emitting data in peak rate bursts. Reasonable multiplex
utilization is obtained only when the source peak rate is large compared to the source mean
rate and is a small fraction of the link rate.
The concept of effective bandwidth applies to the problem of admission control in a
buffered multiplexing system. The effective bandwidth calculation expresses the stationary
buffer overflow probability as a single exponential term, in which the exponent is
proportional to the dominant eigenvalue of the multiplexing system. In a refinement to the
effective bandwidth approximation, the exponential term is multiplied by a constant
determined from Chernoff's theorem which relates to the performance in a bufferless
multiplexing system.
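As a rough numerical sketch of this refined estimate (the decay rate, Chernoff prefactor and buffer sizes below are invented values, not results from this work), the overflow probability is approximated as a prefactor times a single exponential term and compared against a target cell loss ratio:

```python
# Sketch of the refined effective-bandwidth overflow estimate described above:
# P(queue > B) is approximated by L * exp(-eta * B), where eta relates to the
# dominant eigenvalue of the multiplexing system and L comes from a Chernoff
# (bufferless) calculation. The numbers below are invented.
import math

def overflow_probability(eta, chernoff_constant, buffer_cells):
    return chernoff_constant * math.exp(-eta * buffer_cells)

def admissible(eta, chernoff_constant, buffer_cells, target_clr=1e-9):
    """Simple admission test: accept if the estimated overflow probability
    (used here as a proxy for the cell loss ratio) meets the target."""
    return overflow_probability(eta, chernoff_constant, buffer_cells) <= target_clr

eta, L = 0.05, 0.3
for buffer_cells in (100, 300, 500):
    p = overflow_probability(eta, L, buffer_cells)
    print(buffer_cells, f"{p:.2e}", admissible(eta, L, buffer_cells))
```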
Isolation of traffic flows between different service classes, or connections, is achieved by
using separate virtual queues. In each outgoing link time slot a scheduling mechanism selects
one of the virtual queues from which the head-of-line (HOL) cell is to be transmitted.
The Generalized Processor Sharing (GPS) algorithm is an idealized fluid-flow model, in that
service is offered to sessions in arbitrarily small increments. A close approximation to GPS is
Packet by packet Generalized Processor Sharing (PGPS). In PGPS, service is offered to the
packet which would be first among packets present in the queue to finish service in the GPS
system. In contrast to PGPS, Self Clocked Fair Queuing (SCFQ) derives virtual time not from
a reference system, but from the progress of work in the system itself. The system virtual time
is regarded as the service tag of the packet currently in service.
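As a minimal sketch of the SCFQ tagging rule just described (the session weights and packet trace are invented; service is treated as instantaneous for brevity), arrivals are stamped with finish tags derived from the tag of the packet in service, and the smallest tag is served first:

```python
# Sketch of Self Clocked Fair Queuing (SCFQ): the virtual time used to stamp
# arrivals is the service tag of the packet currently in service. Weights and
# the packet trace are illustrative assumptions.
import heapq

class SCFQ:
    def __init__(self, weights):
        self.weights = weights          # session -> share of the link rate
        self.finish = {s: 0.0 for s in weights}
        self.virtual_time = 0.0         # tag of the packet in service
        self.queue = []                 # (tag, seq, session, length)
        self.seq = 0

    def arrive(self, session, length):
        tag = max(self.finish[session], self.virtual_time) \
              + length / self.weights[session]
        self.finish[session] = tag
        heapq.heappush(self.queue, (tag, self.seq, session, length))
        self.seq += 1

    def transmit(self):
        tag, _, session, length = heapq.heappop(self.queue)
        self.virtual_time = tag         # virtual time tracks the work in progress
        return session, length

sched = SCFQ({"video": 3.0, "data": 1.0})
for session, length in [("video", 1500), ("data", 1500), ("video", 1500)]:
    sched.arrive(session, length)
print([sched.transmit() for _ in range(3)])
```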
The following strategies to provision resources for video services in a multiservice network
are proposed.
- rate envelope multiplexing - suitable for low bit rate video services
- peak rate allocation - suitable for higher rate video. Unused cell slots are available to best
effort traffic.
- leaky bucket constrained video - a leaky bucket is used to shape the video source.
Resources are allocated using an equivalent bandwidth based upon the worst case source
peak and mean bit rates.
Nada Bojic - An Object-Oriented Very Low Bit-rate Video Coding Scheme
In recent years, there has been a growing demand for a variety of very low bit-rate audio-
visual applications. Although the H.263 standard, developed to meet the demand for video
telephony over the existing telephone network, is able to provide a reasonable image quality
at low bit-rates (28 to 64 Kbit/s), it is not able to provide an acceptable image quality at very
low bit-rates (8 Kbit/s).
Like the other existing video coding standards (MPEG-1, MPEG-2 and H.261), H.263
employs a block based hybrid DCT (Discrete Cosine Transform) / MC (Motion
Compensated) coding scheme. There are a number of advantages associated with the use of
such video coding schemes. They are simple to implement, generic (in the sense that they can
be applied to various types of images) and offer quite good performance at high enough bit-
rates. However at very low bit-rates (8 Kbit/s), block based hybrid DCT / MC coding schemes
are not able to provide an acceptable image quality. Coded images suffer from unacceptable
"blocky" artifacts, due directly to the underlying block based methodology.
Given that the emerging standard, MPEG-4, developed for the purpose of coding audio-
visual objects, is heavily based on the H.263 standard, it is not expected to provide
substantially better performance than that which can already be obtained by H.263.
My research is aimed at developing an object-oriented, very low bit-rate video coding
scheme which does not suffer from the "blocky" visual artifacts associated with conventional
block based DCT / MC coding schemes. This is achieved by introducing an improved motion
model based on image warping.
Unlike the motion model used by conventional coding schemes, which assumes that all
motion is translational, the proposed scheme supports complex motion (translational,
rotational and zooming) by virtue of the warping paradigm. This will help overcome the
"blockiness" suffered by conventional coding schemes at very low bit-rates and improve the
reduction of temporal redundancy (by means of a more realistic motion model).
The proposed scheme will operate within the framework of MPEG-4. It will operate only on
the foreground objects in a video sequence, thereby facilitating object level interactivity and
compression. The proposed scheme is specifically aimed at coding flexible (non-rigid)
objects, like the human face.
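As a rough illustration of warping-based prediction (a sketch only, not the proposed scheme: real schemes use dense meshes, blending and sub-pixel interpolation), an affine transform can be fitted to three control points whose motion has been estimated, and used to predict a patch of the current frame from the reference frame. The control points and image contents below are invented.

```python
# Minimal sketch of warping-based motion compensation with an affine model.
# Control points, their motion and the reference image are assumptions.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Solve for A, t such that dst ~= A @ src + t from 3 point pairs."""
    src = np.hstack([np.asarray(src_pts, float), np.ones((3, 1))])
    params = np.linalg.solve(src, np.asarray(dst_pts, float))   # 3x2
    return params[:2].T, params[2]

def warp_predict(reference, A, t, out_shape):
    """Predict pixels by mapping each output coordinate back into the
    reference frame (nearest-neighbour sampling for brevity)."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
    Ainv = np.linalg.inv(A)
    src = coords @ Ainv.T - (Ainv @ t)
    sx = np.clip(np.rint(src[:, 0]), 0, reference.shape[1] - 1).astype(int)
    sy = np.clip(np.rint(src[:, 1]), 0, reference.shape[0] - 1).astype(int)
    return reference[sy, sx].reshape(out_shape)

reference = np.arange(64 * 64, dtype=float).reshape(64, 64)
# Control points translate and deform slightly between frames.
src_pts = [(10, 10), (50, 10), (10, 50)]
dst_pts = [(12, 10), (52, 12), (11, 50)]
A, t = fit_affine(src_pts, dst_pts)
predicted = warp_predict(reference, A, t, reference.shape)
print(predicted.shape, A.round(2), t.round(2))
```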
Geoffrey Tham - Evaluating Coded Video Quality in the Presence of Errors
Coded video is increasingly being used over telecommunications networks to realize
various services such as pay TV, video conferencing and distance learning. However,
telecommunications networks suffer from bit and burst errors due to physical effects and
network congestion. These can significantly affect the quality of coded video because coding
removes redundant information. As a result, various non-standardized error
concealment methods have been implemented to reduce the perceived degradation due to the
network errors. To enable telecommunications carriers to select an appropriate decoder for
use in their particular network, it is essential that a means for reliably evaluating their
performance be developed. The objective of this research is thus to be able to select decoders
with error concealment schemes that can function over telecommunications networks with
particular error characteristics.
Traditional methods for evaluation include using simple PSNR metrics or human observers.
The former does not correlate well with perceived quality whereas the latter is laborious and
yields non-repeatable results. To overcome these limitations, human visual system models
have been proposed based on the data available from neurophysiology and psychophysics.
These have performed reasonably well, but they have yet to be extended to evaluate the
visible effects of coded video subject to network errors.
Current research is concentrated on obtaining and implementing a generic architecture to
test different aspects of the human visual system (HVS). A typical HVS model is outlined
below.
The video sequence is broken up into subbands based on its spatial frequency and
orientation, and temporal frequency. This corresponds to the presence of various visual
pathways in the HVS. The contrast sensitivity function provides a measure of the variation in
contrast threshold and is caused by the center-surround receptive fields in the retinal ganglion
cells. It depends on many variables, but most HVS models are only concerned with its
variation as a function of spatial and temporal frequency. Interchannel masking and
facilitation (changes in the detectability of a 'target' in the presence of a 'mask') is modeled
as the normalization (or inhibition) of the excitatory channels with the pooled (summed)
inhibitory channels. A comparison of the errored and non-errored video sequence is then
performed, before a decision on the quality of the video is obtained through a Minkowski
summation.
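As an illustration of that final pooling step only (the channel responses and the exponent below are invented values, not a calibrated metric), per-channel differences between the errored and non-errored sequences can be combined with a Minkowski summation:

```python
# Illustrative sketch of the pooling stage of an HVS-based quality metric:
# masked channel differences are combined with a Minkowski summation.
# The channel responses and the exponent beta = 4 are assumed values.
import numpy as np

def minkowski_pool(reference_channels, degraded_channels, beta=4.0):
    """Pool per-channel differences into a single distortion score."""
    diff = np.abs(np.asarray(reference_channels, float)
                  - np.asarray(degraded_channels, float))
    return (diff ** beta).sum() ** (1.0 / beta)

# Assumed normalized responses for a handful of spatio-temporal channels.
reference = [0.82, 0.40, 0.15, 0.05]
degraded = [0.70, 0.43, 0.30, 0.05]
print(f"distortion score: {minkowski_pool(reference, degraded):.3f}")
```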
Dr. Greg Cambrell - The Systematic Formulation of Electromagnetic
Theory
Electromagnetic theory underpins the propagation and transmission aspects of
telecommunications. It is also important at the device level for understanding the nature of
lumped and distributed circuits and components. Yet electromagnetic theory is usually
regarded as being abstract and complicated.
Part of the reason for the perceived difficulty of electromagnetic theory is the large number
of scalar and vector field variables involved and the complexity of solving Maxwell's equations
in practical situations. Often there are alternative ways of formulating an electromagnetic
problem, each way having its own advantages and disadvantages. For example, boundary
conditions are more easily modeled with scalar potential functions rather than with vector
fields, and open radiation problems are solved more efficiently by using integral equations
rather than partial differential equations.
Over the years my research interests have centered on the systematic formulation of
electromagnetic theory with a view to finding patterns and connections which provide
economies of description and which unify various solution techniques. Examples include the
use of duality to allow the computation of both upper and lower bounds on the numerical
solution of capacitance and inductance, the merging of differential and integral equations
along an auxiliary boundary to facilitate the finite element solution of unbounded field
problems, the computation of various functionals representing parameters of interest such as
impedance, scattering and admittance matrix elements in microwave devices, and the finite
element solution of nonlinear optical waveguides.
The deeper connections in theoretical physics between the electromagnetic fields and
potentials continue to provide fascinating insight into the more practical formulations of
electromagnetic problems. For example, the so called Tonti diagrams provide a clear picture
of the possible primal and dual formulations many of which have been proposed at various
times by independent researchers. Presently I am continuing this work by exploring the use
of an alternative notation, namely, differential forms. With the collaboration of a new Ph.D.
student this year, I hope that further simplifying connections can be discovered and exploited.
Prof. Bill Brown - Telecommunications Network Performance
My involvement in telecommunications is centered on a recently acquired teaching
commitment in the third year subject 3342 (Switching and signaling) and the fourth year
subject 4347 (Telecommunications network performance). Liren Zhang developed these in
the heyday of telecommunications growth in the Department. They are important for those
students wishing to specialize in the telecommunications area. Their content includes both
circuit switched and packet switched material. There is a bit of each in both, the total being
something like:
Circuit switched networks. Hybrids and codecs. From electromechanical switches to
digital time and space switches. Pulse code modulation. Erlang traffic theory. Dimensioning
multi-stage switches. Stored program control. Signaling. Narrowband ISDN. Overload
controls for circuit switched networks.
Packet switched networks. Queuing theory. Window flow control mechanisms.
Broadband ISDN. Traffic characterization. Analysis of access control. Switching of ATM
networks.
Thus, there is a mixture of analytical and descriptive material.
In 3342, the experimental work involves simulation using COMNET and an investigation of
the Model ISDN Exchange. There are regular problem-solving tutorials. In 4347 there are
also problem-solving tutorials, as well as a major problem-solving assignment.
Terry Cornall - Speech recognition
The ability to extract meaning from continuous speech is a facility that humans take almost
for granted, and, like other abilities such as visual pattern recognition, it is a task that
proves difficult to transfer to a computer or other automated system. This talk will discuss
some of the characteristics of speech that contribute to this difficulty, and will comment on
signal and information processing techniques and features of speech that have been
applied with some success in attempts to overcome the obstacles to automated
recognition of continuous speech.
Aspects that will be discussed are:
- Phonemes
  The smallest segments of sound that can be distinguished by their contrast within words.
- Syllables
  Hierarchical phonetic structure of syllables
- Frequency analyses
  Formants
- Continuous speech
  Endpoint detection
  Intonation contours
  Co-articulation
Dr. David Suter - Use of motion extraction/segmentation in multi-media
and digital media libraries
Historical Film Processing: This work involves aspects of restoration and video coding of
old films. Much of the work has been carried out on a 1906 film The Story of the Kelly Gang.
Aspects of this work were funded by an Engineering Faculty small grant (1996) and a large
grant from the Australian Research Council (1997-1999). A small grant from the Collier
Charitable Trust (1996) is also gratefully acknowledged.
Optical Flow Calculation: Optical flow is the apparent motion of objects on the image plane as
the objects (or the camera) move. This has applications in robotics and in video coding
technology. A related issue is the recovery of motion in "images" that come from biomedical
applications (including volumetric/3D images - CT, MRI etc.). In this latter category, we are
trying to recover the motion of the human heart as it beats. Aspects of this work (involving
biomedical image analysis) were funded by an ARC small grant (1995). With Alireza Bab-
Hadiashar, robust statistics have been applied to optic flow calculations. Aspects of this work
are funded by an ARC small grant (1997).
Fast Approximation of PDE Splines: PDE Splines generalize the thin-plate Splines. They
try to "build more physics" into the modeling of data. A component of this research involves
the study of vector splines (good for modeling fluid flow, heart motion, meteorological data
etc.). This is funded by an ARC large grant (1995-97). With Fiona Chen, we have
investigated multiple order smoothness constraints, fast approximation methods, and
applications (including cardiac motion modeling).
Prof. Greg Egan - Distributed Video Servers
There is a growing interest in video servers for a range of applications. Applications range
from the production of audio visual material (including the increasing use of digital special
effects and animation), viewing of contemporary materials across cable and satellite networks,
to browsing archival material from large national collections.
The access characteristics vary from highly localized, with a relatively small amount of
material being accessed repeatedly in an intensely interactive manner, through to sparse
access where material may not be accessed again for days or months. It appears clear that a
number of the user behaviors assumed in the literature bear little resemblance to reality, and for
some users the behavior has not been explored at all.
The user requirements dramatically affect the architecture of the video servers, secondary
and tertiary storage, and the networks that connect them to the users. A one-size-fits-all
approach will not lead to an economically viable solution. The work in CTIE offers the
opportunity to observe the behavior of different classes of real users on medium scale
networks and servers and to extend this behavior to the modeling of the architectures likely to
be found in the future.
Peter Hawkins -
No abstract submitted.
Andrew Amiet -
No abstract submitted.
Janet Baker - Monash Copyright and Royalties Unit - the PAML project
This presentation will discuss the progress of, and interim results from, the research conducted
by the Monash Copyright and Royalties Unit into the experiences and reactions of the
Performing Arts community to the establishment of a Performing Arts Media Library
(PAML) using digital recording technology and providing the facility for 'video on demand'.
At this stage of the project the following objectives have been achieved:
- a survey and annotation of national and international print-based material has been
completed
- a survey, assessment and annotation of national and international Internet-based material,
including related websites of relevant organizations, has commenced and will also be
available soon in electronic form
- a clarification of the issues surrounding the wider intellectual property, copyright and
royalties environment as it relates to the Performing Arts arena has been reached
At this stage of the research it is clear that the whole area is in a state of transition. For the
community, for legal experts, for the collecting agencies and for governments, here in Australia
and overseas, it is increasingly clear that a legal regime designed to cope with a pre-digital,
pre-Internet and basically broadcast electronic delivery age, oriented, moreover, to the rights
of writers and composers, and increasingly to producers, struggles to come to terms with an
environment coping simultaneously with the impact of digitization and the move from
broadcasting to a range of transmission forms.
As well, it is premature to talk about industry practice and experience. The creative and
diverse range of the Performing Arts community is reflected in the responses and experiences
that have been gathered during the research.
Nevertheless, some key themes have emerged.
- The need for a new legal regime to take account of the technological changes, and the
consequent internationalization of reach, impact and compensatory regimes
- the concerns for artistic integrity and moral rights that play such an important role in the
community's concerns, and the development of dominating paradigms from vanguard
organizations
- the issue of a performer's rights as compared to those of writers and composers. The latter
receive greater protection and compensation under the current compensatory
arrangements, while the former are more vulnerable to exploitation, exposure and even
misrepresentation of performance through digital enhancement, plagiarism and mimicking,
and even unfair selection of personal performance for overall ensemble considerations
- the balancing of archival preservation of ephemeral art forms as opposed to the
consideration of income
- the complexities in the management of a compensatory payment in the 'video on demand'
context
- the genuine enthusiasm of the community to embrace new technology if it does not
distract from what is seen as the core task, the 'being there' live performance, and if there
are adequate funds and production expertise to support such recording
- the concern of the community to react positively rather than negatively to the
internationalization of the artistic community, and a recognition that talk of a global
economy reflects a reality.
Assoc. Prof. Jim Breen - The IP protocol - Has it a future?
No abstract presented.
Bernd Gillhoff - Real-time Services over Internet Protocol Networks
Internet networks guarantee the correctness of data transferred from one device to another,
but give no temporal guarantees for this transfer. Real-time data such as video and audio,
games, etc. are reasonably tolerant of transmission errors, but place stringent requirements on
timing, such as latency and delay variation. If a packet is not delivered on time, then it is
useless.
This talk will outline some of the real-time services being carried on the Internet and how
these timing requirements are being addressed.
Dr. Bruce Tonkin - Proposal for New Center in Telecommunications
Services
This talk will briefly describe a proposal to establish a center for telecommunications
services at Monash University. The center will be jointly managed by the Department of
Electrical and Computer Systems Engineering and the School of Computer Science and
Software Engineering. It will cover engineering and computing aspects of
telecommunications services development.
Assoc. Prof. Henry Wu - Multimedia signal processing and communications
The talk defines the research and applications areas of multimedia computing and
communications (MMCC). It describes a number of projects conducted and outlines key
expertise areas in Digital Systems associated with multimedia signal processing and
communications.
William Morton - The Digital Media Library project
In early 1996, Cinemedia (then known as the State Film Center of Victoria) approached
CTIE and Silicon Graphics as research partners in the Digital Media Library project.
Cinemedia already manage a collection of over 10,000 video titles in their film library. They
are available mostly as VHS tapes, but also as 16mm film and other film formats. In the past
Cinemedia have distributed a catalogue with details of the collection to its members. The
members of the library can then order a copy of particular films. The films are then sent to the
nearest library for collection by the member, or sent in the post.
Cinemedia have now put the catalogue and booking system on the Web so that orders can
be placed and dispatched electronically. The next logical step is for members to be able to
browse the catalogue and then simply "click" on a title, which can then be streamed direct to
their PC.
The Digital Media Library project has involved the construction of a fully scalable video on
demand system, a browsing and booking interface and content/rights management software.
CTIE has been involved in many facets of the project:
The Video Coding Group has encoded 200 hours of video into MPEG format. Groups of
naive and expert viewers were used to determine appropriate quality levels for this type of
service and a volume encoding facility was set up.
Silicon Graphics have provided large scale video server hardware and software and SGI
engineers have worked closely with CTIE engineers to design the server and storage system
required for the Digital Media Library.
ANSPAG have provided expertise and support in network design and implementation,
including commissioning a broadband ATM link to Cinemedia in the city and connection to
both Telstra and Optus cable modems.
The challenges for the project during 1998 will include the development of distributed
server systems that will support clients who do not have broadband ATM links to Monash
University! Large scale off-line storage management systems will need to be investigated and
hopefully incorporated into the system. "Live" users will need to be signed up, initially on a
trial basis, but with a view to migration to a full "pay per view" system.
The Digital Media Library team is also actively investigating the possibilities for combining
the DML and McIVER projects. This would be with a view to commercialization of the
content and rights management software that would appear to be world class at this stage.
There has been some interest from overseas companies.
Dr. Bin Qiu - The application of fuzzy logic and neural networks in ATM
congestion control
Among the traffic classes specified by the ATM Forum TM4.0 and ITU-T I.371, only the ABR
class has the flexibility of adjusting its source delivery rate according to the network situation.
An ABR source generates resource management (RM) cells that are used by ATM switches to
convey their current congestion status. When RM cells are looped back to the source by the
destination, well-defined procedures analyze the messages in those RM cells and
react accordingly. The ABR class was originally designed for pure data traffic. However, recent
proposals suggested that it could also be used for real-time service. With the establishment
of a low cost and efficient congestion avoidance and control strategy, ABR can be the most
popular service category. Rate-based, closed-loop congestion control has been recommended
by the ATM Forum and ITU as the framework for ABR traffic control. The minimum
requirement for ATM switches to provide the ABR service is a single-bit explicit forward
congestion indication (EFCI), while up-to-date congestion management schemes normally
handle multi-byte explicit rate (ER) information, as in the enhanced proportional rate control
algorithm (EPRCA) and explicit rate indication for congestion avoidance (ERICA) with
max-min fairness. An essential issue for these and any future congestion control schemes is
to generate fair and appropriate ER values that are used by the sources to adjust delivery rates.
ATM networks support much higher rates than earlier Frame Relay networks and LANs. The
signal propagation delay is therefore more detrimental to closed-loop congestion control
because the current ER information can only have a delayed impact on sources, which causes
buffer management problems that lead to excessive QoS degradation or network under-
utilization. The problem is more serious in wide area networks (WANs). Special
consideration has to be given to the control loop delay. One possible solution is to segment
the end-to-end control loop into smaller hop-by-hop loops, with virtual source (VS) and virtual
destination (VD) functions implemented at the nodes. This approach involves the separation and
reassembly of all connections and the inclusion of all source/destination related functions at
the switch. Another approach is traffic prediction at the switch. The predicted traffic intensity or
queue boundary at a round-trip delay (of the control loop) ahead can be used to estimate more
precise ER values.
Existing approaches to prediction usually make use of low order linear autoregression. A
linear autoregression model provides accurate prediction only if the stochastic process has
special characteristics, which are not normally met by teletraffic patterns. As a result,
prediction error increases. Fuzzy logic has the ability to adapt to different situations, and
applying fuzzy logic to traffic prediction is a novel approach. Simulations show that a fuzzified
Kalman predictor outperforms linear AR predictors in all QoS aspects. The actual rate
generator can also be fuzzified to further improve the results. In order to achieve real-time
operation, an artificial neural network can be trained to implement the fuzzy logic predictor.
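As a baseline illustration only (not the fuzzy or Kalman schemes themselves), the kind of low-order linear autoregressive predictor those schemes are compared against can be sketched as follows; the traffic trace and the choice of model order are invented for this example.

```python
# Baseline sketch: a first-order linear autoregressive predictor of the kind
# the fuzzy/Kalman schemes above are compared against. The per-interval
# arrival counts are synthetic and the model order is chosen for brevity.
import numpy as np

def fit_ar1(series):
    """Least-squares fit of x[t+1] ~= a * x[t] + b."""
    x, y = np.asarray(series[:-1], float), np.asarray(series[1:], float)
    a, b = np.polyfit(x, y, 1)
    return a, b

def predict_next(series, a, b):
    return a * series[-1] + b

# Synthetic per-interval arrival counts at a switch port (invented numbers).
trace = [120, 135, 150, 148, 160, 172, 165, 180, 190, 185]
a, b = fit_ar1(trace)
print("one-step prediction:", round(predict_next(trace, a, b), 1))
```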
Linda Wilkins - Eureka! Or have you got that? (A loose translation of the
Greek)
Promoting business possibilities based on new technology is generally seen as outside the
province of technology managers while business managers do not feel adequately informed to
manage this function themselves. Consequently, there frequently exists a gap between
problem solvers and those with a need for solutions.
Technology managers are most strongly focused on the control of inputs, while the greatest
benefits of technology relate to outputs, for example increased organizational efficiency and
enhanced strategic capabilities.
Recognition of these issues in one corporate organization led to the commissioning of a one-
day program for their information technology managers. The objectives of the program
included:
- Increased understanding of how to present technical opportunities in the commercial
context
- Familiarity with what a specific technology can do and awareness of its limitations
- Ability to anticipate a range of questions from an informed client
- Increased skills in establishing and managing relationships with clients
- Increased awareness of first strike and sustainable advantage
The program was developed collaboratively by a senior member of the ANSPAG group
with a strong administrative background, a senior research engineer and a lecturer in
communication skills.
The introduction of new technology to the IT division of a large organization was presented
in the context of a recognized need to improve the liaising skills of technical managers
working with external clients. Before undertaking competitive tendering, the managers were
expected to familiarize themselves with what a specific technology, such as video on demand,
could do. They were also expected to have a sound understanding of how to present technical
opportunities to business managers working within tight budgetary constraints. Information
about video on demand was presented in terms of 'selling' outcomes. The group presentations
required participants to focus on bridging the communication gap between technical and
commercial managers. An outline of key features in the 'selling' style of the competing teams
from the IT division leading to selection of a winner concludes this presentation.
Bala Kumble - Network Services at Telstra Research Laboratories
No abstract submitted.
Dr. Jean Armstrong - OFDM
No abstract submitted.
Dr. Arkady Zaslavsky - Mobile Computing @ Monash University
In recent years, mobile computing has become the focus of vigorous research efforts in
various areas of computer science and engineering. These areas include wireless networking,
distributed systems, operating systems, distributed databases, software engineering,
applications development, just to name a few. Mobile computing is associated with mobility
of hardware, data and software in computer applications. Mobile computing has become
possible with the convergence of mobile communications and computer technologies, which
include mobile phones, personal digital assistants (PDA), handheld and portable computers,
wireless local area networks (WLAN), wireless wide area networks and wireless ATM. The
increasing miniaturization of virtually all system components is making mobile computing a
reality. Mobile computing - the use of a portable computer capable of wireless networking - is
already revolutionizing the way we use computers.
The combination of networking and mobility will engender new types of information
systems. These systems can support collaborative environments for impromptu meetings,
electronic bulletin boards whose contents adapt to the current viewers, lighting and heating
that adjust to the needs of those present, Internet browsing, hotel and flight reservation,
navigation software to guide users in unfamiliar places and tours, wireless security systems,
wireless electronic fund transfer point of sale (EFTPOS) systems, remote E-mail, enhanced
paging, wireless facsimile transmission, remote access to host computers and office LANs,
information broadcast services, and applications for law-enforcement agencies, to name just a
few.
The presentation focuses on research projects being carried out in the MOBIDOTS (Mobile,
Distributed and Object Technologies and Systems) research group. The projects include:
• Data replication in mobile computing environments;
• Transaction management under mobility of hosts;
• Wireless network interoperability, gateways and mutual support;
• Mobile and distributed objects;
• Data-intensive applications in mobile computing environments;
• Identification, connection and disconnection handling;
• Query optimization in mobile computing environments;
• Building mobile computing research infrastructure, etc.
One potential application of wireless network technology is also demonstrated. The
group has successfully set up a wireless network at the Melbourne Convention Center to
provide Internet services to the delegates of TOOLS-Pacific'97. The wireless network was
connected to the Monash computer network using point-to-point Aironet wireless bridges.
The presentation concludes with a discussion of existing national and international
collaborations. Future plans are also presented.
Philip Branch - The McIVER project
No abstract submitted.
Jason But - McIVER Software Development
McIVER started out as a research project into the feasibility of Video on Demand systems.
The primary aims of this project were to:
• Examine the feasibility of setting up a Video on Demand service.
• Work through the technical problems involved with setting up and maintaining a Video
on Demand service.
• Develop an intuitive and useful interface through the co-operation of users.
• Research how a Video on Demand service is really used by gathering and analyzing
statistics from usage logs.
While research continues on the development of the user interface and on gathering
statistical data on use of the service, most of the major technical problems have been solved.
As such, the McIVER project has matured to a stage where it is ready to be developed into
a commercial product.
In order to commercialize McIVER, it was necessary to examine exactly what it was about
McIVER that we could commercialize, and also to ensure that we did not waste development
time by producing a product already available on the market. In doing this, we looked at what
we had learnt from both the McIVER project and the subsequent Digital Media Library project.
The first and most obvious marketable product we had developed was the user interface for
accessing and reviewing the Video assets. This included both the user movie viewing
software and the mechanism for providing an asset catalog on the web.
The second most obvious asset was our acquired knowledge and expertise in setting up a
Video on Demand service. This is more difficult to market, and we realized that it had
to be packaged in a way that simplifies the management and installation of Video on Demand
systems.
Finally, we looked at where there were gaps in the emerging world of Video on Demand
systems, to see if there was functionality obviously lacking in the many emerging
systems provided by different manufacturers. While many manufacturers had developed
systems for actually delivering the Video assets, what stood out most of all was the
lack of an asset management system that allows easy management of the assets
installed on the Video server while also providing a great deal of flexibility in restricting the
viewing of these assets to authorized users.
Thus was born the McIVER Video on Demand solution: a system providing a single generic
interface for managing assets on a range of supported Video servers, along with a
highly developed user interface for viewing those assets.
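The sketch below is a minimal illustration, not the actual McIVER code, of one way such a generic management interface could be structured, with per-server adapters behind a common front end; all class and method names are assumptions introduced here.

from abc import ABC, abstractmethod

class VideoServerAdapter(ABC):
    """Hypothetical common operations needed from any supported video server."""

    @abstractmethod
    def install_asset(self, asset_id, source_path):
        ...

    @abstractmethod
    def remove_asset(self, asset_id):
        ...

    @abstractmethod
    def list_assets(self):
        ...

class AssetManager:
    """Generic front end; viewing requests are checked against an authorized-user list."""

    def __init__(self, adapter, authorized_users):
        self.adapter = adapter
        self.authorized_users = set(authorized_users)

    def publish(self, asset_id, source_path):
        # Install the asset on whichever server the adapter wraps.
        self.adapter.install_asset(asset_id, source_path)

    def request_view(self, user, asset_id):
        # Only authorized users may view assets installed on the server.
        return user in self.authorized_users and asset_id in self.adapter.list_assets()

Under this kind of design, supporting a new video server only requires writing another adapter; the management and viewing layers stay unchanged.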
Dr. Raymond Li - Business Multimedia, collaborative learning and CD-
ROM/Internet hybrid strategies
The World Wide Web is now starting to influence the way that people conduct their
business. Multimedia, however, has not been adopted in industry as widely as it should be. This
talk will describe the work that has been done in promoting the use of multimedia in business
in Australia, and particularly the use of PCs as the platform for cost-effective delivery of
fast-moving technologies such as multimedia and the Internet.
Narrow bandwidth and variation in the latency of existing networks have hindered the effective
delivery of temporal data streams over the Internet. The predicted exponential growth in the
number of new entrants to the Internet can easily outstrip the promise offered by solutions
such as ATM. We will discuss the use of a CD-ROM/Internet hybrid model to address this
problem.
Multimedia technology provides support for the "Learner Centered Instructional Model"
and collaborative technology provides support for the "Learning Team Centered Instructional
Model". Few tools now available on the market can handle both models
simultaneously. We will outline development work in the areas of "Distributed Learning" and
"Just-in-Time Training" that embraces both models.
There is a lack of tools available to help multimedia application developers with the design,
in particular the conceptual design, of their applications. Paper-based storyboards or
computerized scratch pads are often used to facilitate the communication of ideas among
project team members. A three-layer storyboard model will be presented, and project
management issues of multimedia application development will be discussed.
Daniel Grimm - The eMERGE-Monash University Multimedia Delivery
Systems Project
The eMERGE-Monash University joint project aims to provide a reference site for
information on multimedia delivery via the Internet. Multimedia delivery systems (audio and
video streaming and non-streaming) have been investigated and, where possible, demonstrated.
Reports produced under the project have been made available at the
www.ctie.monash.edu.au/emerge/multimedia website. The site also contains information on
standards that relate to multimedia delivery, as well as references to coding and delivery
techniques.
Investigations have mainly focused on multimedia streaming solutions for low bit-rates (the
Internet), and there is currently a demonstration server available with the Microsoft NetShow
streaming server, as well as the Vxtreme server. Very few of the video streaming servers are
truly adaptive; most re-send and/or lose packets under congested network conditions.
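As a general illustration of what "truly adaptive" delivery involves (this is an assumption-laden sketch, not the behaviour of NetShow or Vxtreme), a server can step its encoding rate down when the receiver reports loss, rather than re-sending packets into an already congested path:

# Hypothetical rate ladder and loss thresholds; purely illustrative.
RATE_LADDER_KBPS = [28, 56, 128, 256]

def choose_rate(current_index, loss_ratio):
    """Pick the encoding-rate index to use for the next feedback interval."""
    if loss_ratio > 0.05 and current_index > 0:
        return current_index - 1   # congestion reported: step the rate down
    if loss_ratio < 0.01 and current_index < len(RATE_LADDER_KBPS) - 1:
        return current_index + 1   # little loss: probe a higher rate
    return current_index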
Related to the delivery of multimedia online are Internet videophone applications.
Investigations have been conducted to evaluate the relative performance and advantages of
various software packages and their applicability to real-life situations.
Dr. Mahbub Hassan - Traffic Management for Intranets Connected to ATM
Networks
An intranet is the internal information superhighway, built from the well-established
Internet protocols (TCP, IP, etc.), within a corporation or an organization. Such intranets have
enormous benefits over proprietary networking, as they provide ready access to the global
Internet, which connects millions of computers and databases all over the world. Corporations
world-wide are rapidly deploying intranets for their organizations.
ATM is a high-speed communication technology recently standardized to build the next
generation of telecommunications networks. The available bit rate (ABR) service of the public
ATM network is a low-cost, efficient service primarily designed to support data
communications and other non-real-time traffic. It is expected that in the near future, many
geographically distributed intranets will be interconnected via the ATM ABR service.
Sophisticated traffic management mechanisms have been standardized for the ATM ABR
service. Unfortunately, such mechanisms merely "shift" any congestion within the ATM
network to the edge of the network, in this case to the intranet-ATM gateway.
Traffic congestion at the intranet-ATM gateway due to a sudden drop in ABR bandwidth can
cause high packet loss and may seriously degrade the performance of the intranet. Efficient
traffic management mechanisms need to be implemented within the intranet to control the
congestion at the access gateway.
Our current research focus is to investigate, propose and implement a suitable traffic
management mechanism for IP-based intranets to efficiently control congestion at the
intranet-ATM gateway. A future direction would be to extend this work to multi-service
intranets built on IPv6, the next generation of the Internet protocol.
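One possible shape for such a mechanism, sketched here purely as an assumption rather than as the mechanism proposed in this work, is a queue-threshold controller at the gateway that applies backpressure to intranet sources when the ABR drain rate drops:

# Minimal sketch of an intranet-ATM gateway queue with high/low watermarks.
# Names, thresholds and the backpressure signal are illustrative assumptions.
class GatewayQueue:
    def __init__(self, capacity_packets, high_watermark=0.8, low_watermark=0.5):
        self.capacity = capacity_packets
        self.high = int(capacity_packets * high_watermark)
        self.low = int(capacity_packets * low_watermark)
        self.backlog = 0
        self.backpressure = False

    def enqueue(self):
        """Accept one packet from the intranet side; False means it is dropped."""
        if self.backlog >= self.capacity:
            return False
        self.backlog += 1
        if self.backlog >= self.high:
            self.backpressure = True   # ask intranet sources to slow down
        return True

    def drain(self, packets_allowed_by_abr_rate):
        """Forward as many packets as the current ABR bandwidth permits."""
        self.backlog = max(0, self.backlog - packets_allowed_by_abr_rate)
        if self.backlog <= self.low:
            self.backpressure = False  # congestion cleared; release sources

When the ABR rate drops suddenly, the backlog rises towards the high watermark and sources are throttled before the queue overflows, which is the packet-loss scenario described above.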
Brett Pentland - Network Testbed
ANSPAG has, over the past two years, developed a Network Testbed equipped with more
than $2.5 million worth of ATM switches, video servers, ATM test gear, and assorted other
equipment.
The Network Testbed is in place to support the work undertaken by ANSPAG and the
CTIE. It provides an advanced network on which to test new high-quality applications, as
well as providing connections into a number of different wide area networks. These include
Telstra's new "Accelerate ATM", cable modems from both Telstra and Optus, and, in the near
future, Internet connections through traditional public phone lines and ISDN, allowing
comparison of application performance over a number of different networking technologies.
Supporting this networking equipment is a number of ATM and Ethernet analyzers that
allow network traffic to be examined in detail. Extensive protocol decodes, as well as cell
and packet timing information, allow complex interworking issues to be resolved.
This presentation will describe the Network Testbed and its early development through to
its current state and planned extensions over the coming months.
Neil Clarke - ATM Network at Monash University
No abstract submitted.