Fig 4 - uploaded by P. Duhamel
PHY packet format in standard  

Source publication
Article
Full-text available
This paper presents an enhanced permeable layer mechanism useful for highly robust packetized multimedia transmission. Packet header recovery at various protocol layers using MAP estimation is the cornerstone of the proposed solution. The inherently available intra-layer and inter-layer header correlation proves to be very effective in selecting a...

Similar publications

Conference Paper
Full-text available
This paper presents the research and development process of an integrated multimedia conference system named "BK Meeting Anywhere-BKMA" and related issues. The system seamlessly integrates Internet multimedia and telephony networks for both real-time and non-real-time communications. One of the most important features of the system is the successfuln...

Citations

... The header recovery mechanism is illustrated in Figure 1.8 and applied to the PHY and MAC layers in (Marin et al., 2010). We can see that at each layer of the protocol stack, the header uses information from upper and lower layer fields as well as the previously decoded headers from each layer. ...
... It is thus not necessary to recover those fields, but they can still be protected by a checksum or a CRC and must be taken into account in some particular cases. Figure 1.8 Illustration of the header recovery correction based on intra and inter-layer redundancies (Marin et al., 2010). ©2010, IEEE ...
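The excerpts above describe header recovery as picking, among the field values permitted by intra- and inter-layer redundancy, the one that best matches the channel soft output. A minimal MAP selection over such a candidate set can be sketched as follows (toy Python; the field width, LLR values and candidate set are all hypothetical, and a real receiver combines many fields across layers):

```python
def map_field_estimate(llrs, candidates, width):
    """Return the candidate field value maximizing the log-likelihood
    given per-bit LLRs (convention: positive LLR favors bit 0)."""
    def loglik(value):
        total = 0.0
        for i in range(width):
            bit = (value >> (width - 1 - i)) & 1
            # log-likelihood of this bit, up to a constant: (1 - 2*bit) * LLR / 2
            total += (1 - 2 * bit) * llrs[i] / 2
        return total
    return max(candidates, key=loglik)

# Toy 4-bit field: the per-bit hard decision reads 0b0101 = 5.
llrs = [2.1, -1.8, 0.3, -2.5]
map_field_estimate(llrs, [3, 5, 9], 4)   # -> 5 (hard decision already valid)
map_field_estimate(llrs, [3, 9], 4)      # -> 3 (redundancy rules 5 out)
```

When the hard decision is not in the candidate set allowed by the redundancy, the MAP choice corrects it, which is the effect the citing texts describe.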
Thesis
Video content constitutes the main category of data transmitted in the world today. The quality of the transmitted content is ever increasing, thanks to the deployment of networks able to support huge traffic loads at high speeds, along with strategies to reduce the amount of data necessary to carry video information, based on more efficient video encoders. However, the quality of the video stream perceived by the end user can be greatly degraded by transmission errors. In fact, a packet can be either corrupted or lost during transmission due to channel impairments, resulting in missing video information that must be recovered. Several strategies exist to recover such information. Retransmission of the damaged packet can be performed, but this option is not always viable under real-time constraints, as in video streaming, or when the global network load must not be increased. To recover missing information, error correction methods can be applied at the receiver's side. In this thesis, we propose error correction methods at the receiver's side based on the properties of the widely used error detection code, the Cyclic Redundancy Check (CRC). These methods use the syndrome of a corrupted packet, computed at the receiver, to produce the exhaustive list of error patterns, containing up to a defined number of errors, that could have resulted in that syndrome. We propose different approaches to achieve such error correction. First, we present an arithmetic-based approach which performs logical operations (XORs) on the fly and needs no memory storage to operate. The second approach is an optimized table approach, in which the repetitive computations of the first method are precomputed before the communication and stored in efficiently constructed tables; this makes the method significantly faster at the cost of memory storage.
The error correction validation is performed through a two-step process, which cross-checks the candidate list with another error detection code, the checksum, and then validates the syntax of the encoded packet to test its decodability. We test these new methods with wireless transmission simulations of H.264 and HEVC compressed video content over Wi-Fi 802.11p and Bluetooth Low Energy channels. The latter shows the most significant error correction rates and allows the reconstruction of a near-optimal video even when the channel quality starts to decrease.
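The syndrome-driven candidate enumeration described in this abstract can be illustrated with a small sketch (Python; CRC-8 with polynomial 0x07 is an illustrative choice, not the thesis's actual code, and real packets are far longer):

```python
from itertools import combinations

CRC8_POLY = 0x07  # x^8 + x^2 + x + 1 (illustrative; init 0, no final XOR)

def crc8(bits):
    """Bitwise CRC-8 over a list of bits, MSB first."""
    reg = 0
    for b in bits:
        top = ((reg >> 7) ^ b) & 1
        reg = (reg << 1) & 0xFF
        if top:
            reg ^= CRC8_POLY
    return reg

def candidate_error_patterns(received_bits, max_errors=2):
    """Exhaustively list the flip-position tuples (up to max_errors flips)
    whose CRC equals the syndrome of the received packet.  This works
    because the CRC is linear: syndrome(codeword ^ e) = crc(e)."""
    syndrome = crc8(received_bits)
    if syndrome == 0:
        return [()]              # packet already consistent with its CRC
    n = len(received_bits)
    candidates = []
    for weight in range(1, max_errors + 1):
        for positions in combinations(range(n), weight):
            e = [0] * n
            for p in positions:
                e[p] = 1
            if crc8(e) == syndrome:
                candidates.append(positions)
    return candidates
```

The thesis's arithmetic approach generates this list on the fly with XORs, while its table approach precomputes the pattern syndromes; the two-step validation (checksum cross-check, then syntax/decodability test) would then prune the candidate list.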
... Past work has shown in simulation that UDP-Lite can decrease loss rates for multimedia applications by deeming partially damaged data acceptable [15,27]. Like link-layer coding techniques that implement packet and header correction from redundant information [26], these techniques' tolerance of damaged payloads can mostly preserve application-layer quality metrics while reducing retransmission rates. ...
Article
Integrity checking is ubiquitous in data networks, but not all network traffic needs integrity protection. Many applications can tolerate slightly damaged data while still working acceptably, trading accuracy versus efficiency to save time and energy. Such applications should be able to receive damaged data if they so desire. In today's network stacks, lower-layer integrity checks discard damaged data regardless of the application's wishes, violating the End-to-End Principle. This paper argues for optional integrity checking and gently redesigns a commodity network architecture to support integrity-unprotected data. Our scheme, called Selective Approximate Protocol (SAP), allows applications to coordinate multiple network layers to accept potentially damaged data. Unlike previous schemes that targeted video or media streaming, SAP is generic. SAP's improved throughput and decreased retransmission rate is a good match for applications in the domain of approximate computing. Implemented atop WiFi as a case study, SAP works with existing physical layers and requires no hardware changes. SAP's benefits increase as channel conditions degrade. In tests of an error-tolerant file-transfer application over WiFi, SAP sped up transmission by about 30% on average.
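SAP itself coordinates existing layers (its WiFi case study changes no hardware), but the core idea, verifying integrity only where the application demands it, can be mimicked at application level. The sketch below (Python; the 2-byte-checksum packet layout and the `coverage` parameter are invented for illustration, loosely following UDP-Lite's partial checksum coverage) delivers packets whose damage lies outside the protected region:

```python
def checksum16(data: bytes) -> int:
    """16-bit ones'-complement sum (Internet-checksum style)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return (~total) & 0xFFFF

def deliver(packet: bytes, coverage: int):
    """Verify only the first `coverage` body bytes (hypothetical layout:
    2-byte checksum, then the body).  Damage beyond the covered region
    is tolerated and the packet is still delivered."""
    stored = int.from_bytes(packet[:2], "big")
    body = packet[2:]
    if checksum16(body[:coverage]) != stored:
        return None          # damage inside the protected region: drop
    return body              # damage outside: deliver anyway
```

A damaged payload byte thus reaches the error-tolerant application, while a damaged header still causes a drop, avoiding the retransmissions the paper measures.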
... test objects are used to validate the effectiveness of the BF-CRC algorithm, with simulations carried out on Matlab 2010b, and its performance is finally compared with the Robust CRC and Min DIS algorithms. The protocol is set to include an 8-bit fixed field K, an 8-bit unknown field U, a 30-bit don't-care field O, and a check field C using CRC8. The physical layer is assumed to transmit DBPSK signals and to provide the positions of the δ bits with the lowest confidence. Although the Robust CRC algorithm was proposed to exploit soft information for error correction, hard-decision information is still used in the experiments for practical reasons such as computational load. Experimental results for different test-vector lengths δ are shown in Fig. 3: when δ is small, the error correction performance of BF-CRC is poor, with a header error rate of about 5 ... The identification (ID), time-to-live (TTL) and Checksum fields do not affect the correctness of the connection or the transmission, and are therefore defined as the don't-care field O (by the nature of the check, the Checksum in fact also carries redundant information, but for lack of time this is not discussed here). In summary, header error correction for network data ultimately reduces to tolerant estimation of the source IP address. When several downloads run simultaneously, the source IP address of received data identifies the data flow, so correcting the source IP address from the CRC check field (FCS) is of great importance. For simplicity, the 802.11 WiFi signal is simplified by ignoring spreading, channel coding and similar techniques, and the signal is taken to be DBPSK. In the actual protocol, the CRC check field (FCS) is 4 bytes long, so applying the Robust CRC algorithm directly, as in the earlier example, would require 2^32 double-type storage units (about 16 GByte) to store the check-state probabilities, an unaffordable storage cost. Reference [8] therefore proposed a sub-optimal Robust CRC algorithm (sub-Robust CRC) that splits the FCS into four 8-bit check fields assumed independent, reducing the storage requirement to ...
Article
For the error-prone protocol headers of wireless network data, this paper puts forward a bit-flip subset-restriction header recovery algorithm, after studying the existing one based on the Cyclic Redundancy Check (CRC). A constraint subset centered on the received vector is set up to narrow the search space by exploiting the confidence information of each bit, overcoming the high complexity of the former header recovery algorithm. Then, a theoretical analysis and an experimental verification of the value range of the test vector's length are carried out, combining the wireless signal and wireless channel models. The simulation results show that this method maintains good performance at a low computing cost by adjusting the test vector's length to wireless signals with different Signal-to-Noise Ratios (SNR).
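The bit-flip subset restriction can be sketched as follows (Python; CRC-8 with polynomial 0x07 and the reliability values are illustrative, whereas the paper works with the 802.11 FCS and DBPSK soft outputs): the search is restricted to the δ least-reliable positions, and flip subsets are tested in order of increasing weight until the CRC checks out.

```python
from itertools import combinations

POLY = 0x07  # CRC-8 (illustrative; the paper targets the 802.11 FCS)

def crc8(bits):
    """Bitwise CRC-8 over a list of bits, MSB first."""
    reg = 0
    for b in bits:
        top = ((reg >> 7) ^ b) & 1
        reg = (reg << 1) & 0xFF
        if top:
            reg ^= POLY
    return reg

def bit_flip_recover(hard_bits, reliabilities, delta, max_flips=3):
    """Restrict the search to the `delta` least-reliable positions and
    test flip subsets in increasing weight until the CRC checks out."""
    order = sorted(range(len(hard_bits)), key=lambda i: reliabilities[i])
    suspect = order[:delta]                      # the delta weakest bits
    for weight in range(0, max_flips + 1):
        for subset in combinations(suspect, weight):
            trial = list(hard_bits)
            for p in subset:
                trial[p] ^= 1
            if crc8(trial) == 0:                 # CRC consistent: accept
                return trial
    return None
```

With δ small, the true error positions may fall outside the suspect set (the poor low-δ performance reported in the excerpt above); with δ large, the subset enumeration grows combinatorially, hence the trade-off the paper analyzes.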
... However, we are making use of the joint source and channel decoding strategy (see e.g. [5], [6]) which intends to make the best use of the received signal, whatever its quality. Such schemes are able to process the payloads at each layer, even in the case where some errors occur due to wireless transmission. ...
... The protocol mechanism is modified only inside the receiver and combines the two robust recovery approaches related to headers and to the payload at the application layer. Header recovery is not addressed in this paper; we refer to [6] and [7] for that matter. We assume that the headers of the transmitted packets at the various layers are available without errors, and that soft values of the payload are available at the application layer even if the corresponding hard decisions are corrupted. ...
... Here we extend these results to the full deflate standard by including the Huffman code, and provide a full integration with channel decoding algorithms via iterations. ...
... Two complementary methods exist. The first exploits the presence of a CRC block in AIS messages, which carries part of the message information and can therefore be used as a source of redundancy [2]-[3]. The second method uses another type of information about the messages. ...
Presentation
Full-text available
This work presents the analysis and the proposed architectures of a GNSS application, AIS (Automatic Identification System), which makes it possible to monitor maritime traffic and also to send a distress alert via AIS-SART. It also presents a solution for receiving AIS signals by satellite in order to extend the coverage area. These solutions were proposed after an in-depth analysis of the GNSS applications used in maritime navigation, such as SMDSM, VMS, LRIT and AIS. The main objective of the study is the use of this application on all artisanal fishing boats and certain vessels in Senegal, for safety and development in the fishing sector. The study of the proposed solutions is part of the AiA project set up by the EU and the GSA, with the aim of creating a concrete link between Europe and Africa in the field of satellite navigation, positioning and related applications, focusing in particular on raising awareness and on the transfer of basic knowledge and education.
... Throughout this work, we assume that such a cross-layer scheme is implemented, together with robust decoding mechanisms for the headers of the received packets, so as to allow the extraction of information coming from the lower protocol layers [100,101]. ...
... Classical error detection mechanisms (CRCs or checksums) at lower protocol layers do not allow corrupted packets to reach the upper application layer. Implementing JSCD techniques at the APL layer needs the use of permeable protocol layers at the receiver side [75,42,101]. Such mechanisms require robust header decoding techniques [101] and transmission of the bit soft information or reliability measures (coming from the channel decoders at physical layer) to the upper protocol layers, as detailed in [117]. ...
... Implementing JSCD techniques at the APL layer needs the use of permeable protocol layers at the receiver side [75,42,101]. Such mechanisms require robust header decoding techniques [101] and transmission of the bit soft information or reliability measures (coming from the channel decoders at physical layer) to the upper protocol layers, as detailed in [117]. Reliable recovery of the various headers involved in the protocol stack may be ensured by employing joint protocol-channel decoding techniques at the receiver side [101,42]. ...
Article
This thesis aims at proposing and implementing efficient joint source-channel coding and decoding schemes in order to enhance the robustness of multimedia contents transmitted over unreliable networks. First, we propose to identify and exploit the residual redundancy left by wavelet video coders in the compressed bit streams. An efficient joint source-channel decoding scheme is proposed to detect and correct some of the transmission errors occurring during a noisy transmission. This technique is further applied to multiple-description video streams transmitted over a mixed architecture consisting of a wired lossy part and a wireless noisy part. Second, we propose to use the structured redundancy deliberately introduced by multirate coding systems, such as oversampled filter banks, in order to perform a robust estimation of the input signals transmitted over noisy channels. Two efficient estimation approaches are proposed and compared. The first one exploits the linear dependencies between the output variables, jointly with the bounded quantization noise, in order to perform a consistent estimation of the source outcome. The second approach uses the belief propagation algorithm to estimate the input signal via a message-passing procedure along the graph representing the linear dependencies between the variables. These schemes are then applied to estimate the input of an oversampled filter bank, and their performances are compared.
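The first estimation approach mentioned above exploits the linear dependencies that an oversampled filter bank imposes on its outputs. As a minimal stand-in (Python; the 4x2 analysis matrix is hypothetical, and the thesis's consistent estimator additionally exploits the bounded quantization noise, which plain least squares ignores), one can solve the overdetermined system y = F x through the normal equations:

```python
# Hypothetical 4x2 analysis matrix: 4 outputs for a 2-D input (redundancy 2).
F = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0],
     [1.0, -1.0]]

def estimate(y):
    """Least-squares estimate of the 2-D input x from the 4 noisy outputs y,
    by solving the normal equations (F^T F) x = F^T y in closed form."""
    a = sum(f[0] * f[0] for f in F)          # (F^T F)[0][0]
    b = sum(f[0] * f[1] for f in F)          # (F^T F)[0][1] == [1][0]
    d = sum(f[1] * f[1] for f in F)          # (F^T F)[1][1]
    r0 = sum(f[0] * yi for f, yi in zip(F, y))   # (F^T y)[0]
    r1 = sum(f[1] * yi for f, yi in zip(F, y))   # (F^T y)[1]
    det = a * d - b * b
    return [(d * r0 - b * r1) / det, (a * r1 - b * r0) / det]

# Noiseless check: x = [2, -1] gives y = [2, -1, 1, 3] and is recovered
# exactly; with noisy y, the redundancy averages the noise down.
```

The design choice the thesis studies is precisely how much better one can do than this baseline by also enforcing consistency with the quantizer cells, or by running belief propagation on the dependency graph.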
... In [2] the cyclic redundancy check (CRC) contained in the AIS messages is used as a source of redundancy to correct transmission errors in presence of bit stuffing. Other solutions were proposed previously (see [2], [3], [4], [5] and references therein) to correct errors by using the CRC as redundancy, and not only as an error detection tool, as it was primarily conceived. However, these methods cannot be used in presence of bit stuffing. ...
Conference Paper
Full-text available
This paper addresses the problem of error correction of AIS messages by using a priori knowledge of some information in the messages. Indeed, the AIS recommendation sets a unique value or a range of values for certain fields in the messages. Moreover, physics can limit the range of some fields, such as the speed of the vessel or its position (given the position of the receiver). The repetition of the messages also provides information: the evolution of the ship position between messages is limited, and the ship ID is known. The constrained demodulation algorithm presented in this article is an evolution of the constrained Viterbi algorithm (C-VA). It is based on a modified Viterbi algorithm that allows the constraints to be taken into account in order to correct transmission errors, by using new registers in the state variables. The constraints can be either a single value or a range of values for the message fields. Simulation results illustrate the algorithm performance in terms of bit error rate and packet error rate. The performance of the proposed algorithm is 2 dB better than that obtained with the receiver without constraints.
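The idea of pruning trellis branches that contradict known field values can be sketched in a toy setting (Python; a rate-1/2 convolutional code with generators (7,5) octal stands in for the AIS trellis, and the `known` map pins individual input bits, a simplification of the paper's single-value and range constraints handled through extra state registers):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators (7,5) octal, K=3."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def viterbi_constrained(received, known=None):
    """Hard-decision Viterbi decoder for the code above.  `known` maps a
    time index to a pinned input bit; branches contradicting a pinned
    bit are pruned, which is how protocol constraints enter the trellis."""
    n = len(received) // 2
    metric = {0: 0}          # start in the all-zero state
    paths = {0: []}
    for t in range(n):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metric, new_paths = {}, {}
        for s, m in metric.items():
            s1, s2 = (s >> 1) & 1, s & 1
            for u in (0, 1):
                if known and known.get(t, u) != u:
                    continue                     # constraint violated: prune
                bm = ((u ^ s1 ^ s2) != r0) + ((u ^ s2) != r1)
                ns = (u << 1) | s1
                if ns not in new_metric or m + bm < new_metric[ns]:
                    new_metric[ns] = m + bm
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]
```

Pinning known bits can only remove wrong competitors from the trellis, which is the source of the coding gain the paper reports.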
... However, we are making use of the joint source and channel decoding strategy (see e.g. [5], [6]) which intends to make the best use of the received signal, whatever its quality. Such schemes are able to process the payloads at each layer, even in the case where some errors occur due to wireless transmission. ...
... The protocol mechanism is modified only inside the receiver and combines the two robust recovery approaches related to headers and to the payload at the application layer. Header recovery is not addressed in this paper; we refer to [6] and [7] for that matter. We assume that the headers of the transmitted packets at the various layers are available without errors, and that soft values of the payload are available at the application layer even if the corresponding hard decisions are corrupted. ...
... Here we extend these results to the full deflate standard by including the Huffman code, and provide a full integration with channel decoding algorithms via iterations. ...
Article
Full-text available
This paper proposes an algorithm for the robust reception of compressed HTML files transmitted over a noisy mobile radio channel. Both source encoders and transmission systems are assumed to be standard compliant. The source encoder follows the HTTP1.1 protocol specifications, i.e. the HTML files are encoded by the deflate algorithm, a combination of Lempel-Ziv and Huffman algorithms. The transmission scheme follows IEEE 802.11a (and 802.11n) standard as an example. The proposed receiver is based on an iterative joint source-channel decoding approach. The Soft-Input Soft-Output outer source decoder is based on a sequential M-algorithm, which has been modified to improve the decoding performance by exploiting the specific grammatical and syntax rules of (i) Huffman codes; (ii) Lempel-Ziv codes; and (iii) HTML language. Simulation results following the IEEE 802.11a (and 802.11n) standard over additive white Gaussian noise and Rayleigh fading channels show that the proposed receiver drastically reduces the number of errors occurring in the received HTML files compared to the classical receivers. An EXIT chart analysis illustrates some properties of this combination of source and channel decoders.
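One ingredient of such a receiver is testing whether a candidate bit stream respects the deflate grammar at all. A coarse stand-in for the paper's fine-grained in-decoder checks (Python; the soft-input M-algorithm itself is not reproduced here) simply asks zlib whether a candidate parses as a raw deflate stream:

```python
import zlib

def deflate_valid(candidate: bytes) -> bool:
    """Coarse syntax check: does the candidate parse as a raw deflate
    stream?  (A stand-in for the grammatical checks the paper applies
    inside its sequential soft-input decoder.)"""
    try:
        zlib.decompress(candidate, wbits=-15)   # wbits=-15: raw deflate
        return True
    except zlib.error:
        return False

# Toy usage: strip the 2-byte zlib header and 4-byte checksum to get a
# raw deflate stream, corrupt one byte, and keep only repairs that parse.
stream = zlib.compress(b"<html><body>hello</body></html>")[2:-4]
pos = len(stream) // 2
repairs = [v for v in range(256)
           if deflate_valid(stream[:pos] + bytes([v]) + stream[pos + 1:])]
```

The true byte value is always among the surviving repairs, but syntax alone rarely isolates it, which is why the paper combines such grammatical constraints with channel soft information and HTML-level rules inside an iterative decoder.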
... JPCD techniques were first used to improve the efficiency of various layers of the protocol stack. For example, compared to classical approaches, more reliable header recovery may be performed [8], or aggregated packets may be more efficiently delineated [9]- [13]. ...
Article
Full-text available
Channel decoding makes use of redundancy in the coded bits. However, any redundancy in the information bitstream can also improve the decoding process. This paper shows that a careful examination of communication standards (including the headers added by the network layers) exhibits some fields which can be considered as redundant, either based on already received packets, or on the standard. The efficient use of this knowledge in the channel decoding process is denoted as protocol-assisted channel decoding. Assuming perfect synchronization and available channel state information, the proposed method applied on 802.11a PHY and MAC layers provides a substantial link budget improvement without modifying the standard, while the introduction of an additional interleaver provides additional bit error rate improvements.
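The principle of protocol-assisted decoding is to turn redundant header fields into a priori information for the decoder. A minimal sketch (Python; the LLR values are invented, and in the actual scheme the combined LLRs feed the 802.11a channel decoder rather than a direct hard decision):

```python
KNOWN = 40.0  # near-certain prior for fields fixed by the standard

def protocol_assisted_decide(channel_llrs, known_bits):
    """Combine channel LLRs with a priori LLRs derived from protocol
    redundancy (known or predictable header fields), then take hard
    decisions.  Convention: positive LLR favors bit 0."""
    decisions = []
    for i, llr in enumerate(channel_llrs):
        prior = 0.0
        if i in known_bits:
            prior = KNOWN if known_bits[i] == 0 else -KNOWN
        decisions.append(0 if llr + prior > 0 else 1)
    return decisions
```

The strong prior overrides unreliable channel observations on the redundant positions; fed into a convolutional or LDPC decoder, the same combined LLRs also help the neighboring unknown bits, which is where the link-budget gain comes from.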
... Second, an improved version of an on-the-fly 3S FS automaton [5] is presented in Section III. It combines robust header recovery techniques from [22] with Bayesian hypothesis testing inspired from [6], [13], [14] to localize frame boundaries via a sample-by-sample search. The techniques presented here are quite general. ...
... The constant field k contains all bits that do not change from frame to frame. It includes the SW indicating the beginning of the frame, if available, and other bits that remain constant [22] once the communication is established. The header is assumed to contain a length field u_n, indicating the size λ_n of the frame (including the header) in bits. ...
... This requires an error-free decoding of the headers of lower protocol layers, which contain this information. This may be done using methods from [22], which enable lower protocol layers to forward soft information to the layer where it is processed. The main drawback of the proposed FS technique in terms of implementation is the increase in memory requirements for storing the soft information, estimated in [18] to be three to four times that of storing hard bits. ...
Article
Full-text available
In many communication standards, several variable-length frames generated by some source coder may be aggregated at a given layer of the protocol stack in the same burst to be transmitted. This decreases the signaling overhead and increases the throughput. However, after a transmission over a noisy channel, Frame Synchronization (FS), i.e., recovery of the aggregated frames, may become difficult due to errors affecting the bursts. This paper proposes several robust FS methods making use of the redundancy present in the protocol stack combined with channel soft information. A trellis-based FS algorithm is proposed first. Its efficiency comes at the cost of a large delay, since the whole burst must be available before the processing begins, which might not be possible in some applications. Thus, a low-delay and reduced-complexity Sliding-Window-based variant is introduced. Second, an improved version of an on-the-fly three-state automaton for FS is proposed, in which Bayesian hypothesis testing is performed to retrieve the correct FS. These methods are compared in the context of the WiMAX MAC layer when bursts are transmitted over Rayleigh fading channels.
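A hard-decision caricature of frame delineation in an aggregated burst (Python; the 1-byte-length plus 1-byte-header-CRC frame format is invented, and the paper's methods instead use channel soft information with trellis, sliding-window or automaton searches) reads each length field, checks the header, and slides forward on failure:

```python
def crc8(data: bytes) -> int:
    """Byte-wise CRC-8, polynomial 0x07 (illustrative choice)."""
    reg = 0
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ 0x07) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

def delineate(burst: bytes):
    """Recover aggregated frames from a burst.  Hypothetical frame format:
    a 1-byte length field, a 1-byte CRC over that length byte, then the
    payload.  On a header-check failure, slide one byte and retry."""
    frames, i = [], 0
    while i + 2 <= len(burst):
        length = burst[i]
        if crc8(burst[i:i + 1]) == burst[i + 1] and i + 2 + length <= len(burst):
            frames.append(burst[i + 2:i + 2 + length])
            i += 2 + length                  # jump to the next frame boundary
        else:
            i += 1                           # resynchronize by sliding
    return frames
```

Sliding on a failed check is the crude counterpart of the Bayesian hypothesis tests the paper performs with soft information, which also let it trade delay against complexity across its trellis, sliding-window and automaton variants.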