Monday, June 30, 2008

Linux as RTOS

Most telecommunication and data-communication products use a Real-Time Operating System (RTOS). Software built on top of an RTOS is commonly called embedded software. Let's discuss some basic definitions and classifications of RTOSs, and how Linux fits in.

Since Linux is an open-source operating system with strong support for multiple processor architectures, including MIPS, x86, and PowerPC, it is becoming more popular with embedded software developers as an RTOS. Because of its open-source nature, Linux is constantly evolving. In this article, the standard Linux kernel is analyzed.

What's an RTOS

An RTOS has the ability to respond to an external event within a deterministic time. Customers measure an RTOS's real-time performance using the figures its vendor publishes for interrupt latency and for the execution times of any system tasks that run at higher priority than the customer's own tasks.
The "kernel" of an RTOS provides an "abstraction layer" that hides the hardware details of the processor (or set of processors) from the application software that runs on it.

Hard real-time applications have a very specific deadline. That is, the RTOS must respond to the application within a specific time, otherwise an unacceptable result occurs: something blows up, something crashes, some operation fails, someone dies. Soft real-time applications usually must satisfy a deadline, but if a certain number of deadlines are missed by just a little bit, the system may still be considered to be operating acceptably.

Real-time kernel services: A typical RTOS kernel provides Task Scheduling, Intertask Communication and Synchronization, Timers, Dynamic Memory Allocation, and a Device I/O Supervisor.
  • Task Scheduling services include the ability to launch tasks and assign priorities to them.

  • Intertask communication and synchronization services make it possible for tasks to pass information from one to another without danger of corrupting it. That is, while a piece of information is being updated by one task, no other task can change the same piece of information; this is called synchronization. RTOSs provide semaphore or mutex mechanisms, and event flags or signals, for synchronization.

  • Timer - Since many embedded systems have stringent timing requirements, most RTOS kernels also provide some basic Timer services, such as task delays and time-outs.

  • Dynamic Memory Allocation - Many RTOS kernels provide dynamic memory allocation services. These allow tasks to "borrow" chunks of RAM for temporary use in application software. Often these chunks are then passed from task to task as a means of quickly communicating large amounts of data between tasks. To avoid the delays associated with memory fragmentation and defragmentation, real-time operating systems offer non-fragmenting allocation techniques. For example, a "pools" allocation mechanism, rather than the conventional "heap" mechanism, lets application software allocate chunks of memory in a different fixed buffer size per pool. Typical buffer sizes are 31, 63, 127, 255, 511, 1023, 4095, and 65535 bytes. Pools totally avoid external memory fragmentation by never breaking a buffer that has been returned to the pool into smaller buffers. Instead, when a buffer is returned, it is put onto a "free buffer list" of buffers of its own size, available for future reuse at their original size.

  • Device I/O Supervisor - These services, if available, provide a uniform framework for organizing and accessing the many hardware device drivers that are typical of an embedded system.
Other high-level components, such as file system organization, network communication, network management, database management, and user-interface graphics, are optionally added to the operating system. These services, although much larger and more complex than the RTOS kernel, rely on the presence of the kernel and take advantage of its basic services. To keep memory consumption to a minimum, each of these add-on components is included in an embedded system only if its services are needed by the application.
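The "pools" mechanism described above can be sketched in a few lines. This is an illustrative model only, not any particular RTOS's API (the class, pool sizes, and counts are assumptions): each pool keeps fixed-size buffers on a free list, so allocation and release are O(1) and a returned buffer is never split.

```python
class BufferPool:
    """One pool of fixed-size buffers kept on a free list."""
    def __init__(self, buf_size, count):
        self.buf_size = buf_size
        self._free = [bytearray(buf_size) for _ in range(count)]  # "free buffer list"

    def alloc(self):
        # O(1): take a whole buffer or nothing; buffers are never split.
        return self._free.pop() if self._free else None

    def free(self, buf):
        # Returned at its original size, ready for reuse.
        self._free.append(buf)


# A set of pools keyed by a few of the buffer sizes mentioned above
# (the per-pool count of 8 is arbitrary).
pools = {size: BufferPool(size, 8) for size in (31, 63, 127, 255)}

def pool_alloc(nbytes):
    """Allocate from the smallest pool whose buffer size fits the request."""
    for size in sorted(pools):
        if nbytes <= size:
            buf = pools[size].alloc()
            if buf is not None:
                return size, buf
    return None  # request too large, or pools exhausted
```

Note the trade-off: a 100-byte request consumes a whole 127-byte buffer (internal fragmentation), in exchange for deterministic timing and zero external fragmentation.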

Linux as RTOS

In general, Linux is designed for good average-case performance, but it is not designed for deterministic response. In that sense, Linux is not considered a real-time operating system, because it cannot guarantee deterministic performance. Studies show that the Linux kernel, in a relatively carefully constrained environment, may be capable of worst-case response times of about 50ms, with the average being just a few milliseconds. Many real-time applications require response times significantly below 1ms, down into the microseconds.

There are two main reasons for Linux's poor real-time performance on uniprocessor systems: 1) the Linux kernel disables interrupts, and 2) the Linux kernel is not preemptible. That said, Linux is certainly multithreaded, supports thread priorities, and provides predictable thread-synchronization mechanisms.

While interrupts are disabled, the system cannot respond to an incoming interrupt; the longer interrupts are delayed, the longer the expected delay in an application's response to an interrupt. The lack of kernel preemptibility means that the kernel will not preempt itself, for example while executing a system call on behalf of a lower-priority process, in order to switch to a higher-priority process that has just been awakened. This can cause significant delay.

On multiprocessor systems, Linux performance can be even worse, since the kernel also employs locks and semaphores that cause additional delays.

Using Linux in Real-Time Applications

A real-time application may call almost any of the 208 system calls in the Linux kernel, usually indirectly through library routines. Contention for resources, such as synchronization primitives, main memory, the CPU, a bus, the CPU cache, and interrupt handling, can slow an application running on Linux considerably below its best case.

Some tricks that real-time applications use to improve their response time on the Linux kernel:
  • give themselves a high priority;
  • lock themselves in memory (and don't grow their memory usage);
  • use lock-free communication whenever possible;
  • use cache memory wisely;
  • avoid nondeterministic I/O (e.g., sockets);
  • execute within a suitably constrained system: limit hardware interrupts, limit the number of processes, curtail system-call use by other processes, and avoid kernel problem areas (e.g., don't run hdparm).

Improvements to Linux Kernel

Some effort has gone into improving kernel preemptibility, which lets the kernel respond faster to applications without any need to modify application code. The preemptible Linux kernel patch, originally introduced by MontaVista Software and subsequently championed by Robert Love, was merged by Linus Torvalds into the main Linux development-kernel tree beginning with version 2.5.4-pre6, adding a far greater degree of real-time responsiveness to the standard Linux kernel.

Thursday, June 12, 2008

IPTV - The Software behind SDV

Introduction

In a cable network, groups of homes are connected on a common branch of coax cable. That is, groups of subscribers share access to the same downstream frequencies and race for access to shared upstream frequencies. In contrast, traditional wireline networks are point-to-point, from a central office directly to a subscriber; with sufficient switching capacity at the central office, a virtually unlimited amount of content can be delivered to a single household. Switched Digital Video (SDV) is a cable technology that attempts to answer this challenge. It was designed as a cost-effective method of expanding program availability.

With SDV, as with IPTV, and unlike traditional digital broadcasting, programming terminates at the hub and does not go through the network unless requested. Instead, a receiver, such as a set-top box, signals upstream to request programming, and a hub-based controller receives the request and enables the stream into the network by means of a pool of allocated frequencies. In other words, SDV allows operators to switch, rather than broadcast, some channels to individual service groups. A service group is typically made up of 250 or more set-top boxes off a given node. Channels selected for a "switched tier" are delivered via a multicast stream only when a customer in a service group selects them for viewing.
The advantage of using SDV is that cable companies have more bandwidth available to convert into Internet channels during periods of high customer demand. They can also determine which channels are in higher demand and develop localized advertising mechanisms for those customers.

How SDV works

Every time you change the channel in an SDV system, your set-top box engages in a complex digital conversation with the SDV network.

The following diagram depicts how an SDV system works:


1. When the customer changes the channel, the set-top box sends a signal to the SDV server requesting a program to view
2. The SDV server sends a signal to the ERM (Edge Resource Manager)
3 & 4. The ERM communicates with the edge QAM device to identify the requested channel's frequency and pull it from the SDV system's transport section
5. The ERM sends tuning parameters for the requested channel's frequency to the SDV server
6. The SDV server sends the tuning parameters to the customer's set-top box, which tunes to the available frequency.

SDV Software

Much of the brains of any SDV deployment is contained in the software that manages session management (processing individual session and channel-change requests from set-top boxes) and edge resources (dynamically setting up and tearing down sessions to individual QAMs). The Quadrature Amplitude Modulation (QAM) device encrypts the content and forwards it to the set-top box to unscramble and play back. The QAM technique allows cable companies to send multiple digital signals across the same line.

In an SDV system, metadata that describes all broadcast programming is amended to indicate which programs are SDV programs. When an SDV program is selected, tuning software in the receiver sends an upstream signal. An SDV session manager receives the request and maps the program to a frequency within the allocated pool. This dynamic tuning information is returned to the receiver. If the program is already being viewed within the same subscriber group, then the task is as simple as reusing the session frequency information.
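The mapping-and-reuse logic above can be modeled with a toy session manager. All names here are hypothetical, and a real SDV server would also handle metadata, encryption, and QAM capacity, which this sketch ignores; it only shows the core idea of drawing frequencies from a pool and reusing an active stream within a service group.

```python
class SDVSessionManager:
    """Toy sketch: map requested channels onto a pool of QAM frequencies,
    reusing the stream if the channel is already active in the service group."""

    def __init__(self, frequencies):
        self.free = list(frequencies)  # unallocated QAM frequencies (e.g., MHz)
        self.active = {}               # channel -> (frequency, viewer_count)

    def tune(self, channel):
        if channel in self.active:     # already streaming: reuse its frequency
            freq, viewers = self.active[channel]
            self.active[channel] = (freq, viewers + 1)
            return freq
        if not self.free:
            return None                # frequency pool exhausted
        freq = self.free.pop()         # set up a new switched session
        self.active[channel] = (freq, 1)
        return freq

    def release(self, channel):
        freq, viewers = self.active[channel]
        if viewers == 1:               # last viewer: tear the session down
            del self.active[channel]
            self.free.append(freq)     # frequency returns to the pool
        else:
            self.active[channel] = (freq, viewers - 1)
```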

Through centralized software systems, the health of the entire SDV system can be managed and reported. Such a system can also monitor the bandwidth of a service group and, if an edge QAM fails, send an alarm. BigBand recently introduced its Video Management System (VMS) in this category.

SDV software products vendors

Here are some vendors that produce SDV software:

The BigBand Networks Inc. Switched Broadcast Session Server (SBSS) serves as the control plane of the SDV system. It performs two major functions, session management and edge resource management, and BigBand packages both of those functions together.

The Cisco Systems Inc. Universal Session Resource Manager (USRM) is the second generation of the vendor's SDV server. It is capable of handling the session and edge resource management functions together or separately.

The Motorola Inc. Switched Video Manager 1000 (SVM 1000) receives the channel-change request, but works in conjunction with the company's ERM 1000, an edge resource manager.

Tuesday, June 10, 2008

NIC and beyond

NIC stands for Network Interface Card. It allows users to connect to each other through cable or wirelessly; it basically allows two computers to communicate with each other over a network such as Ethernet.

A NIC supports both OSI layer 1 (physical) and layer 2 (data link): it provides physical access to the networking medium and a low-level addressing mechanism through the use of a MAC address.
The MAC address, used by Ethernet and other IEEE 802 technologies, is a unique 48-bit address carried by each card capable of networking. It is stored in the card's ROM, and its uniqueness is governed by the IEEE.
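The 48-bit MAC address splits into a 24-bit Organizationally Unique Identifier (OUI) assigned by the IEEE to the vendor and a 24-bit part assigned by the vendor to the card; the two low bits of the first octet flag multicast and locally administered addresses. A small helper (illustrative, not from any library) makes the layout concrete:

```python
def parse_mac(mac):
    """Split a textual MAC address into its IEEE-defined fields."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    if len(octets) != 6:
        raise ValueError("expected a 48-bit MAC address")
    return {
        "oui": octets[:3].hex(":"),  # first 24 bits: vendor ID, assigned by IEEE
        "nic": octets[3:].hex(":"),  # last 24 bits: unique per card, per vendor
        "multicast": bool(octets[0] & 0x01),             # I/G bit
        "locally_administered": bool(octets[0] & 0x02),  # U/L bit
    }
```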

You don't have to run Ethernet on the NIC attached to your PC; you may very well run Token Ring or FDDI, which are also IEEE OSI layer 2 standards.
Most computers these days come with networking integrated, either with an Ethernet chipset on the motherboard or a low-cost Ethernet chip connected through the PCI or PCI Express bus. However, a NIC can be plugged in when additional network interfaces are needed.

A NIC normally sends data at a rate of 10, 100, or 1000 Mbit per second over twisted-pair copper wire or BNC coaxial cable. Recently, newer vendors such as NetXen have used pluggable XFP modules to provide a 10 Gbit per second data transfer rate.

How does a NIC transfer data?

A NIC transfers data by one or more of the following mechanisms:
1. Polling: the CPU polls the NIC's status to see if there is data to be transferred.
2. Interrupt: the NIC interrupts the CPU to indicate data is ready for transfer.
3. Programmed I/O: the CPU alerts the NIC by applying its address on the system address bus.
4. DMA: an intelligent NIC, which has its own CPU, uses DMA to access memory directly.
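Mechanism 1 can be illustrated with a toy device model. The register name, bit value, and layout below are invented for illustration; real NICs document their own register maps, and a real driver would read the status via memory-mapped I/O rather than a Python object.

```python
class FakeNIC:
    """Toy model of a NIC's status register and receive FIFO."""
    RX_READY = 0x01  # hypothetical status bit: a received frame is waiting

    def __init__(self):
        self.status = 0
        self._rx_fifo = []

    def wire_receive(self, frame):
        # What the hardware side would do when a frame arrives on the wire.
        self._rx_fifo.append(frame)
        self.status |= self.RX_READY

    def read_frame(self):
        frame = self._rx_fifo.pop(0)
        if not self._rx_fifo:
            self.status &= ~self.RX_READY  # clear the bit once the FIFO drains
        return frame


def poll_once(nic):
    """Mechanism 1 (polling): the CPU checks the status register, then transfers."""
    if nic.status & FakeNIC.RX_READY:
        return nic.read_frame()
    return None  # nothing ready; a polling driver would simply check again later
```

The cost of this scheme is visible in the model: the CPU burns cycles calling `poll_once` even when nothing has arrived, which is exactly what interrupts and DMA (mechanisms 2 and 4) are designed to avoid.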

Hardware Block Diagram

The following diagram depicts a typical layer 2 Ethernet 10 Gbit card. It uses an XFP module to interface with the optical line and a SerDes to interface with the MAC chip. The MAC chip normally interfaces to the host through an FPGA.





XFP

XFP stands for 10 Gigabit Small Form Factor Pluggable; it is a hot-swappable optical transceiver. It operates at 10 Gbit per second and is typically used for 10 Gigabit Ethernet, but it can also be used for other high-speed broadband optical applications, such as SONET/SDH.

SerDes

A SerDes converts data between a serial interface and a parallel interface at high bandwidth, such as 10 Gbit. The name SerDes derives from SERializer/DESerializer.

FPGA

An FPGA is a programmable logic device that normally provides the interface between an off-the-shelf chip and the host. It can also provide functionality that is missing from a particular ASIC, or take over a task that requires faster performance.


Host Controller

The host controller is a bus interface that allows data to be transferred between the NIC and the host. Depending on the transfer-speed requirement and the host type, a different bus is used. For example, a PCI Express bus is used to transfer data between a NIC and a PC, whereas an SPI-4.2 bus is used to transfer data at a 10 Gbit rate to a router.

 