The information transmission system: transmitting a signal from source to receiver

Schematically, the process of information transfer is shown in the figure. It is assumed that there is a source and a recipient of information. The message from the source to the recipient is transmitted through a communication channel (information channel).

Fig. 3. The information transfer process

In such a process, information is presented and transmitted as a sequence of signals, symbols, or signs. For example, in a direct conversation between people, sound signals are transmitted - speech; when reading a text, a person perceives letters - graphic symbols. The transmitted sequence is called a message. From the source to the receiver, the message travels through some material medium (sound as acoustic waves in the atmosphere, an image as light electromagnetic waves). If technical means of communication are used in the transmission process, they are called information channels. These include the telephone, radio, and television.

We can say that the human senses play the role of biological information channels: with their help, information acting on a person reaches the memory.

Claude Shannon proposed a diagram of the process of transmitting information through technical communication channels, shown in the figure.

Fig. 4. Shannon's scheme of the information transmission process

The operation of such a scheme can be explained by the process of talking on the phone. The source of information is the speaking person. The encoder is the handset microphone, which converts sound waves (speech) into electrical signals. The communication channel is the telephone network (the wires and switches of telephone exchanges through which the signal passes). The decoding device is the handset (earphone) of the listening person - the receiver of information. Here the incoming electrical signal is converted back into sound.

Communication in which the transmission takes place in the form of a continuous electrical signal is called analog communication.

Coding is understood as any transformation of information coming from a source into a form suitable for its transmission over a communication channel.

Currently, digital communication is widely used, in which the transmitted information is encoded in binary form (0 and 1 are the binary digits) and then decoded into text, images, or sound. Digital communication is discrete.

The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interferences, first of all, arise for technical reasons: poor quality of communication lines, insecurity from each other of various flows of information transmitted over the same channels. In such cases noise protection is required.

First of all, technical methods are used to protect communication channels from the effects of noise: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from the noise.

Claude Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant. Due to this, the loss of some part of the information during transmission can be compensated.

However, the redundancy should not be made too large, as this leads to delays and higher communication costs. Shannon's coding theory makes it possible to obtain a code that is optimal in this respect: the redundancy of the transmitted information is the minimum possible, and the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is divided into portions - blocks. For each block, a checksum (the sum of its binary digits) is calculated and transmitted along with the block. At the receiving end, the checksum of the received block is recalculated, and if it does not match the transmitted one, the transmission of this block is repeated. This continues until the sent and received checksums match.

The information transfer rate is the information volume of a message transmitted per unit of time. Its units are bit/s, byte/s, etc.

Technical communication lines (telephone lines, radio links, fiber-optic cable) have a data-rate limit called the bandwidth of the information channel. These limits are physical in nature.


Addendum to Chapter 1

1.1. Transfer of information via technical communication channels

The main topics of the paragraph:

♦ scheme of K. Shannon;
♦ encoding and decoding information;
♦ noise and noise protection. Coding theory by K. Shannon.

K. Shannon's scheme

The American scientist Claude Shannon, one of the founders of information theory, proposed a scheme of the process of transmitting information through technical communication channels, shown in Fig. 1.3.

The operation of such a scheme can be explained by the familiar process of talking on the phone. The source of information is the speaking person. The encoding device is the microphone of the handset, with which sound waves (speech) are converted into electrical signals. The communication channel is the telephone network (the wires and switches of telephone exchanges through which the signal passes). The decoding device is the handset (earphone) of the listening person - the receiver of information. Here the incoming electrical signal is converted back into sound.

Communication in which the transmission takes place in the form of a continuous electrical signal is called analog communication.

Encoding and decoding information

Encoding is understood as any transformation of information coming from a source into a form suitable for its transmission over a communication channel.

At the dawn of the radio era, Morse code was used. The text was converted into a sequence of dots and dashes (short and long signals) and broadcast. A person receiving such a transmission by ear had to be able to decode the sequence back into text. Even earlier, Morse code was used in telegraphy. The transmission of information using Morse code is an example of discrete communication.
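As a small sketch of such discrete encoding and decoding, the fragment below converts text into dots and dashes and back. The partial code table and the use of a space as a letter separator are illustrative assumptions, not a complete Morse implementation.

```python
# A minimal Morse encoding/decoding sketch (partial table, illustrative only).
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "S": "...", "O": "---", "T": "-",
}
REVERSE = {code: letter for letter, code in MORSE.items()}

def encode(text: str) -> str:
    """Convert text to dots and dashes, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

def decode(signal: str) -> str:
    """Recover the text from a space-separated Morse sequence."""
    return "".join(REVERSE[code] for code in signal.split(" "))

message = encode("SOS")        # "... --- ..."
assert decode(message) == "SOS"
```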

At present, digital communication is widely used, in which the transmitted information is encoded in binary form (0 and 1 are the binary digits) and then decoded into text, images, or sound. Digital communication, obviously, is also discrete.

Noise and noise protection. Coding theory by K. Shannon

The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference primarily occurs due to technical reasons: poor quality of communication lines, insecurity from each other of various information flows transmitted over the same channels. Often, when talking on the phone, we hear noise, crackling, which make it difficult to understand the interlocutor, or the conversation of other people is superimposed on our conversation. In such cases noise protection is required.

First of all, technical methods are used to protect communication channels from the effects of noise. Such methods vary widely, from the simple to the very complex: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from the noise.

Claude Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line must be redundant; due to this, the loss of some part of the information during transmission can be compensated. For example, if you can hardly be heard when talking on the phone, repeating each word twice gives the other person a better chance of understanding you correctly.
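A toy sketch of this redundancy idea is the 3-fold repetition code below: each bit is sent three times and the receiver takes a majority vote, so any single flipped bit per triple is corrected. This is a didactic illustration of redundancy, not Shannon's optimal construction.

```python
# 3-fold repetition code: redundancy lets the receiver correct single errors.
def encode(bits):
    return [b for b in bits for _ in range(3)]   # send every bit three times

def decode(received):
    triples = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]   # majority vote

sent = encode([1, 0, 1])           # [1,1,1, 0,0,0, 1,1,1]
noisy = sent[:]
noisy[1] = 0                       # noise flips one bit
assert decode(noisy) == [1, 0, 1]  # the error is corrected
```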

However, the redundancy must not be made too large, as this leads to delays and higher communication costs. Shannon's coding theory makes it possible to obtain a code that is optimal in this respect: the redundancy of the transmitted information is the minimum possible, and the reliability of the received information is the maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is divided into portions - packets. For each packet, a checksum (the sum of its binary digits) is calculated and transmitted with the packet. At the receiving end, the checksum of the received packet is recalculated, and if it does not match the transmitted one, the transmission of this packet is repeated. This continues until the sent and received checksums match.
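The fragment below models this checksum-and-retransmit loop. The 8-bit additive checksum and the unreliable_send() stand-in for a noisy channel are assumptions made for illustration; real protocols transmit the checksum alongside each packet.

```python
# A simplified model of checksum verification with retransmission.
import random

def checksum(packet: bytes) -> int:
    return sum(packet) % 256          # sum of the bytes, modulo 256

def unreliable_send(packet: bytes) -> bytes:
    """Deliver the packet, occasionally corrupting one byte (noise)."""
    data = bytearray(packet)
    if random.random() < 0.2:
        data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

def transmit(packet: bytes) -> bytes:
    expected = checksum(packet)       # sent along with the packet
    while True:                       # repeat until the checksums match
        received = unreliable_send(packet)
        if checksum(received) == expected:
            return received

print(transmit(b"hello") == b"hello")   # True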

Briefly about the main points

Any technical information transmission system consists of a source, a receiver, encoding and decoding devices, and a communication channel.

Encoding is understood as the transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the inverse transformation.

Noise is interference that leads to the loss of information.

In coding theory, methods have been developed for representing transmitted information in order to reduce its loss under the influence of noise.

Questions and tasks

1. Name the main elements of the information transfer scheme proposed by K. Shannon.
2. What is encoding and decoding when transmitting information?
3. What is noise? What are its implications for the transmission of information?
4. What are the ways to deal with noise?

1.2. Zipping and unzipping files

The main topics of the paragraph:

♦ data compression problem;
♦ compression algorithm using a variable length code;
♦ compression algorithm using repetition factor;
♦ archiving programs.

Data compression problem

You already know that the global Internet gives the user access to huge information resources. On the net you can find a rare book, an essay on almost any topic, photographs, music, computer games, and much more. When transferring such data over the network, problems may arise because of its large volume: the capacity of communication channels is still quite limited, so the transmission time may be too long, which entails additional financial costs. Also, there may not be enough free disk space for large files.

The solution to the problem is data compression, which reduces the volume of data while retaining the content encoded in it. Programs that perform such compression are called archivers. The first archivers appeared in the mid-1980s. Their main purpose was to save space on disks, whose information volume at that time was much smaller than that of modern disks.

Data compression (file archiving) is performed according to special algorithms. These algorithms most often use one of two fundamentally different ideas.

Compression algorithm using variable length code

First idea: using a variable-length code. The data being compressed is divided in a special way into parts (strings of characters, "words"). Note that a single character (an ASCII code) can also be a "word". For each "word", the frequency of occurrence is found: the ratio of the number of repetitions of this "word" to the total number of "words" in the data array. The idea of the compression algorithm is to encode the most frequently occurring "words" with shorter codes than the rarely occurring ones. This can significantly reduce the size of the file.

This approach has long been known. It is used in Morse code, where characters are encoded by various sequences of dots and dashes, with more frequently occurring characters having shorter codes. For example, the frequently used letter "А" is encoded as ·−, while the rare letter "Ж" is encoded as ···−. Unlike codes of equal length, in this case there is the problem of separating the letter codes from one another. In Morse code this problem is solved by a "pause" (space), which is in effect a third character of the Morse alphabet; that is, the Morse alphabet consists not of two but of three characters.

Information in computer memory is stored using a two-character alphabet, and there is no special separator character. Nevertheless, a way was found to compress data with variable-length "word" codes that requires no separator: the algorithm of D. Huffman (first published in 1952). All universal archivers use algorithms similar to Huffman's.
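A compact sketch of Huffman's idea follows: symbols that occur more often receive shorter codes, and no code is a prefix of another, so no separator is needed. This is a minimal illustration, not the exact routine used by any particular archiver.

```python
# Build Huffman codes for the characters of a text.
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}   # ...are merged,
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
# 'a' occurs most often, so its code is the shortest one.
assert len(codes["a"]) <= min(len(c) for c in codes.values())
```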

Compression algorithm using repetition factor

Second idea: using a repetition factor. The meaning of an algorithm based on this idea is as follows: if a run of repeating groups of characters occurs in the data array being compressed, it is replaced by a pair: the number (factor) of repetitions and the group of characters. For long repeating runs, the memory gain from compression can be very large. This method is most effective when packing graphic information.
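A minimal run-length encoding (RLE) sketch of this idea, assuming character data for simplicity:

```python
# RLE: each run of repeated characters becomes a (count, character) pair.
from itertools import groupby

def rle_encode(data: str) -> list[tuple[int, str]]:
    return [(len(list(group)), ch) for ch, group in groupby(data)]

def rle_decode(pairs: list[tuple[int, str]]) -> str:
    return "".join(ch * count for count, ch in pairs)

packed = rle_encode("AAAAAABBBCCCCCCCC")   # [(6,'A'), (3,'B'), (8,'C')]
assert rle_decode(packed) == "AAAAAABBBCCCCCCCC"
```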

Archiving programs

Archiving programs create archive files (archives). An archive is a file that stores one or more files in compressed form. To use archived files, they must be extracted from the archive - unpacked. All archiving programs usually provide the following features:

adding files to an archive;
extracting files from an archive;
deleting files from an archive;
viewing the contents of an archive.

Currently, the most popular archivers are WinRar and WinZip. WinRar has more features than WinZip. In particular, it makes it possible to create a multi-volume archive (convenient if the archive must be copied to a floppy disk and its size exceeds 1.44 MB) and a self-extracting archive (in this case the archiver itself is not needed to extract the data).

Here is an example of the benefit of using archivers when transferring data over a network. The size of the text document containing the paragraph you are now reading is 31 KB. If this document is archived with WinRar, the archive file is only 6 KB. As they say, the benefit is obvious.

Using archiving programs is very simple. To create an archive, first select the files to include in it, then set the necessary parameters (archiving method, archive format, volume size if the archive is multi-volume), and finally issue the CREATE ARCHIVE command. The reverse action, extracting files from an archive (unpacking), proceeds similarly: first select the files to be extracted, then determine where they should be placed, and finally issue the EXTRACT FILES FROM THE ARCHIVE command. You will learn more about archiving programs in practical classes.

Briefly about the main points

Information is compressed with the help of special archiving programs.

The two most commonly used methods in compression algorithms are the use of a variable length code and the use of a character group repetition factor.

Questions and tasks

1. What is the difference between constant and variable length codes?
2. What are the capabilities of archiving programs?
3. What is the reason for the widespread use of archiving programs?
4. Do you know any other archivers other than those listed in this paragraph?

I. Semakin, L. Zalogova, S. Rusakov, L. Shestakova, Informatics, Grade 9

What is information

Since the beginning of the 1950s, attempts have been made to use the concept of information (which still has no single definition) to explain and describe a wide variety of phenomena and processes. Some textbooks give the following definition of information:

Information is a collection of information to be stored, transmitted, processed, and used in human activities.

Such a definition is not entirely useless, since it helps at least vaguely to imagine what is being discussed. But logically it is meaningless: the concept being defined (information) is replaced here by another concept (a collection of information), which itself needs to be defined.

With all the differences in the interpretation of the concept of information, it is indisputable that information always manifests itself in material-energy form, in the form of signals.

Information presented in a formalized form that allows it to be processed by technical means is called data.

Information processing is at the heart of solving many problems. To facilitate it, information systems (IS) are created. An IS is called automated if technical means, in particular computers, are used in it. Most existing ISs are automated, so for brevity we will simply call them ISs. In a broad sense, an IS is defined as any information processing system.

By area of use, ISs can be divided into systems used in manufacturing, education, healthcare, science, the military, social services, commerce, and other fields. By objective function, ISs can be conditionally divided into the following main categories: management, information and reference, and decision support. Note that sometimes a narrower interpretation of an IS is used: a set of hardware and software tools employed to solve some applied problem. In an organization, for example, there may be information systems assigned the following tasks: personnel and material accounting, settlements with suppliers and customers, bookkeeping, and so on.

The operational efficiency of an information system largely depends on its architecture. Currently, the client-server architecture is promising. In a common variant, it assumes the presence of a computer network and a distributed database comprising a corporate database (CDB) and personal databases (PDB). The CDB resides on a server computer; the PDBs are placed on the computers of department employees, who are clients of the corporate database.

A server for a certain resource in a computer network is a computer (or program) that manages that resource; a client is a computer (or program) that uses the resource. Databases, file systems, printing services, and mail services, for example, can act as computer network resources. The type of a server is determined by the kind of resource it manages: if the managed resource is a database, the corresponding server is called a database server.

The advantage of organizing an information system according to the client-server architecture is the successful combination of centralized storage, maintenance of, and collective access to common corporate information with individual user work on personal information. The client-server architecture admits various implementations.

Information enters the system in the form of messages. A message is understood as a set of signs or primary signals containing information.

In the general case, the message source is the combination of an information source (the object under study or observation) and a primary converter (a sensor, a human operator, etc.) that perceives information about the process taking place in it.

Fig. 1. Structural diagram of a single-channel information transmission system.

A distinction is made between discrete and continuous messages.

Discrete messages are formed as a result of the sequential issuance by the message source of individual elements - signs.

The set of different signs is called the alphabet of the message source, and the number of signs is called the alphabet size.

Continuous messages are not divided into elements. They are described by continuous functions of time that take on a continuous set of values (speech, a television image).

To transmit a message over a communication channel, a certain signal is assigned to it. A signal is understood as a physical process that displays (carries) the message.

The transformation of a message into a signal suitable for transmission over a given communication channel is called coding in the broad sense of the word.

The operation of recovering the message from the received signal is called decoding.

As a rule, one resorts to the operation of representing the original signs in another alphabet with a smaller number of signs, called symbols. When referring to this operation, the same term, "coding", is used, now considered in the narrow sense. The device that performs this operation is called an encoder. Since the alphabet of symbols is smaller than the alphabet of signs, each sign corresponds to a certain sequence of symbols, called a code combination.

The number of symbols in a code combination is called its significance (length), and the number of nonzero symbols its weight.

For the operation of matching symbols back to the signs of the source alphabet, the term "decoding" is used. The technical implementation of this operation is carried out by a decoding device, or decoder.

The transmitting device converts continuous messages or symbols into signals suitable for passage through the communication line. In doing so, one or more parameters of the chosen signal are changed in accordance with the transmitted information. This process is called modulation and is carried out by a modulator. The reverse conversion of signals into symbols is performed by a demodulator.

A communication line is understood as the medium (air, metal, magnetic tape, etc.) that ensures the passage of signals from the transmitter to the receiver.

The signals at the output of the communication line may differ from those at its input (the transmitted signals) due to attenuation, distortion, and interference.

Interference refers to any disturbing perturbations, both external and internal, that cause the received signals to deviate from the transmitted ones.

From the mixture of signal and noise, the receiving device extracts the signal and, by means of the decoder, reconstructs the message, which in the general case may differ from the one sent. The measure of correspondence between the received message and the sent one is called the transmission fidelity.

The received message is passed from the output of the communication system to the recipient - the subscriber to whom the original information was addressed.

The set of means for transmitting messages is called a communication channel.

Ways to represent numbers

Binary numbers - each digit represents the value of one bit (0 or 1); the most significant bit is always written on the left, and the letter "b" is placed after the number. For ease of reading, tetrads may be separated by spaces. For example, 1010 0101b.
Hexadecimal numbers - each tetrad is represented by one of the characters 0...9, A, B, ..., F. Such a representation can be denoted in different ways; here only the character "h" after the last hexadecimal digit is used. For example, A5h. In program texts, the same number can be written as 0xA5 or 0A5h, depending on the syntax of the programming language. A non-significant zero (0) is added to the left of a most significant hexadecimal digit represented by a letter, to distinguish numbers from symbolic names.
Decimal numbers - each byte (word, double word) is represented by an ordinary number, and the sign of the decimal representation (the letter "d") is usually omitted. The byte from the previous examples has the decimal value 165. Unlike binary and hexadecimal notation, decimal makes it difficult to mentally determine the value of each bit, which sometimes has to be done.
Octal numbers - each triple of bits (grouping starts from the least significant bit) is written as a digit 0-7, with the sign "o" placed at the end. The same number would be written as 245o. The octal system is inconvenient in that a byte cannot be divided into octal digits evenly.
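The byte used in the examples above (1010 0101b = A5h = 165 = 245o) can be checked with Python's built-in base notations:

```python
# One and the same byte in four notations.
x = 0b10100101            # binary literal
assert x == 0xA5          # hexadecimal
assert x == 165           # decimal
assert x == 0o245         # octal
print(bin(x), hex(x), oct(x))   # 0b10100101 0xa5 0o245
```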

To convert a decimal number to the binary system, divide it successively by 2 until the quotient is less than or equal to 1. The number in the binary system is written as the sequence consisting of the last quotient followed by the remainders of the divisions, in reverse order.

Example. Convert the number to binary number system.

To convert a decimal number to the octal system, divide it successively by 8 until the quotient is less than or equal to 7. The number in the octal system is written as the sequence of digits consisting of the last quotient followed by the remainders of the divisions, in reverse order.

Example.

To convert a decimal number to the hexadecimal system, divide it successively by 16 until the quotient is less than or equal to 15. The number in the hexadecimal system is written as the sequence of digits consisting of the last quotient followed by the remainders of the divisions, in reverse order.

Example. Convert the number to hexadecimal.

7. To convert a number from binary to octal, divide it into triads (triples of digits) starting from the least significant digit, padding the most significant triad with zeros if necessary, and replace each triad with the corresponding octal digit (Table 3).

Example. Convert the number to octal number system.

8. To convert a number from binary to hexadecimal, divide it into tetrads (groups of four digits) starting from the least significant digit, padding the most significant tetrad with zeros if necessary, and replace each tetrad with the corresponding hexadecimal digit (Table 3).

Example. Convert the number to the hexadecimal number system.
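A sketch of both kinds of rules follows: to_base() implements the repeated division described above, and binary_to_octal() the triad grouping (a tetrad version for hexadecimal is analogous).

```python
# Repeated-division base conversion and triad grouping.
def to_base(n: int, base: int, digits="0123456789ABCDEF") -> str:
    out = ""
    while n:
        n, r = divmod(n, base)     # keep the remainders...
        out = digits[r] + out      # ...and read them in reverse order
    return out or "0"

def binary_to_octal(bits: str) -> str:
    bits = bits.zfill((len(bits) + 2) // 3 * 3)   # pad the senior triad
    return "".join(to_base(int(bits[i:i + 3], 2), 8)
                   for i in range(0, len(bits), 3))

assert to_base(165, 2) == "10100101"
assert to_base(165, 16) == "A5"
assert binary_to_octal("10100101") == "245"
```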

Kotelnikov's theorem

In the field of digital signal processing, the Kotelnikov theorem (in the English literature, the Nyquist-Shannon theorem, or the sampling theorem) connects analog and discrete signals and states that if an analog signal has a finite (band-limited) spectrum, then it can be restored uniquely and without loss from its samples taken at a frequency greater than or equal to twice the upper frequency: f_s >= 2f_c.

This interpretation considers the ideal case, when the signal began infinitely long ago, will never end, and has no discontinuities in its time function; this is precisely what the concept of "a spectrum bounded from above by a finite frequency" implies. If a signal has discontinuities of any kind in its function of time, then its spectral power vanishes nowhere.

Of course, real signals (for example, sound on a digital medium) do not have such properties, since they are finite in time and usually have discontinuities in their time function. Accordingly, the width of their spectrum is infinite. In this case complete restoration of the signal is impossible, and two consequences follow from the Kotelnikov theorem:

1. Any analog signal can be restored with arbitrary accuracy from its discrete samples taken at a frequency f_s > 2f_c, where f_c is the maximum frequency bounding the spectrum of the real signal;

2. If the maximum frequency in the signal exceeds half the sampling frequency, then there is no way to restore the signal from discrete to analog without distortion.

More broadly, the Kotelnikov theorem states that a continuous signal can be represented as the interpolation series

x(t) = sum over k from -infinity to +infinity of x(k*Delta) * sinc((pi/Delta) * (t - k*Delta)),

where sinc(x) = sin(x)/x is the sinc function. The sampling interval Delta satisfies the constraint

0 < Delta <= 1/(2*f_c).

The instantaneous values of this series at the points t = k*Delta are the discrete samples x(k*Delta) of the signal.

Although in the Western literature the theorem is often called the Nyquist theorem, with reference to the 1928 paper "Certain Topics in Telegraph Transmission Theory", that work deals only with the bandwidth of a communication line required for transmitting a pulsed signal (the pulse repetition rate must be less than twice the bandwidth). Thus, in the context of the sampling theorem it is fair to speak only of the Nyquist frequency. Around the same time, Karl Küpfmüller obtained the same result. The possibility of complete reconstruction of the original signal from discrete samples is not discussed in those works. The theorem was proposed and proved by V. A. Kotelnikov in 1933 in his work "On the transmission capacity of the ether and wire in telecommunications", in which, in particular, one of the theorems is formulated as follows: "Any function consisting of frequencies from 0 to f_c can be transmitted continuously with any precision using numbers following one after another 1/(2f_c) seconds apart." Independently of him, Claude Shannon proved the theorem in 1949 (16 years later), which is why in the Western literature it is often called Shannon's theorem.

The sampling frequency (or sample rate) is the frequency at which a signal is digitized, stored, processed, or converted from analog to digital form. According to the Kotelnikov theorem, the sampling frequency limits the maximum frequency of the digitized signal to half its own value.

The higher the sampling rate, the more faithful the digitization. As follows from the Kotelnikov theorem, in order to restore the original signal uniquely, the sampling frequency must be more than twice the highest required frequency in the signal.
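As a small numeric illustration (with assumed parameters), the sketch below samples a 5 Hz sine at 50 Hz, comfortably above the 2 x 5 Hz limit, and at 8 Hz, below it; in the second case the samples are indistinguishable from those of a lower-frequency sine (aliasing at |8 - 5| = 3 Hz).

```python
# Sampling a sine above and below the Nyquist rate.
import math

f_signal = 5.0                       # Hz, highest frequency in the signal

def sample(rate_hz: float, seconds: float = 1.0):
    n = int(rate_hz * seconds)
    return [math.sin(2 * math.pi * f_signal * k / rate_hz) for k in range(n)]

good = sample(50.0)   # 50 > 2*5: the samples capture every oscillation
bad = sample(8.0)     # 8 < 2*5: aliasing - the samples describe a 3 Hz sine
print(len(good), len(bad))
```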

At present, in mid-range audio equipment the bit depth is in the range of 10-12 bits. The difference between 10 and 12 bits cannot be noticed by ear, because the human ear is unable to distinguish such small deviations. Another reason the difference is moot is the nonlinear distortion of the power amplifier and other components of the audio path, which clearly exceeds the quantization step. Higher resolution often has only marketing value and is in practice not audible.

Digitization is the description of an object, image, or audio/video signal (in analog form) as a set of discrete digital measurements (samples) of that signal/object, obtained with one or another type of equipment; i.e., its conversion into a digital form suitable for recording on electronic media.

For digitization, the object is subjected to sampling (in one or more dimensions: for example, in one dimension for sound, in two for a raster image) and to analog-to-digital conversion of the resulting levels.

The data array obtained as a result of digitization (the "digital representation" of the original object) can be used by a computer for further processing, transmission over digital channels, and saving to digital media. Before transmission or storage, the digital representation is usually filtered and encoded to reduce its volume.

Sometimes the term "digitization" is used in a figurative sense, as a replacement for the specific term for converting information from analog to digital form. For example:

· Digitization of sound.

· Digitization of video.

· Digitization of the image.

· Digitization of books - both scanning and (later) recognition.

· Digitization of paper maps of an area - means scanning and, as a rule, subsequent vectorization (raster-to-vector conversion, i.e. conversion to a vector description format).

Sampling

When digitizing a time-based signal, sampling is usually characterized by the sample rate - the frequency at which measurements are taken.

When scanning an image of a physical object, sampling is characterized by the number of resulting pixels per unit length (for example, the number of dots per inch, DPI) in each dimension.

In digital photography, sampling is characterized by the number of pixels per frame.

Signal quantization

Discrete signals are created from continuous ones. The process of converting a continuous signal into a discrete one is called signal quantization. The original continuous signal is called the signal being quantized; the signal obtained as a result of quantization is called the quantized signal. There are different ways of quantizing a continuous signal.

Quantization in time. The quantized signal contains individual values (samples) of the signal being quantized, taken at fixed moments of time. The process of quantization in time is shown in Fig. 21, where x(t) is the signal being quantized and x*(t) is the quantized signal.

The values of the signal are taken at regular intervals of T, where T is the quantization period (interval). Consequently, the quantized signal consists of a sequence of discrete samples of the signal being quantized, taken at moments that are multiples of the quantization period. Under quantization in time, the quantized signal is described by the lattice function of time

x*(mT) = x(mT),

where m is an integer time argument, m = 1, 2, 3, ...

Quantization by level. When the signal being quantized reaches certain fixed levels, the quantized signal is assigned the value of the level reached, and this value is retained until the signal being quantized reaches the next level (Fig. 22).

In Fig. 22, quantization levels for the signal being quantized x(t) are set with an interval (step) a. The value of the quantized signal x*(t) changes when the signal being quantized reaches the next level. As a result, the quantized signal is a step function of time.

A typical device that performs quantization by level is the electromagnetic relay (Fig. 23), which contains an electromagnet K and electrical contacts S switched by the electromagnet. The input of the relay is the voltage U on the coil of the electromagnet, and the output is the state of the contacts S. Because of how the electromagnet operates, the state of the contacts (closed or open) changes only when the voltage passes through the actuation level U_act of the relay (the actuation level is the value at which the electromagnet is energized and switches the relay contacts).

Thus, for a relay the quantized signal can take only two levels: contacts S open or contacts S closed. The state of the contacts is conveniently described as a logical value that takes the value "1" when the contacts are closed and "0" when they are open.

The characteristic of the conversion of the input voltage U into the state of the contacts S for the relay is shown in Fig. 23. It is a step characteristic whose level changes at the input voltage U = U_act. A characteristic of this type is called a relay characteristic; it is one example of a nonlinear characteristic.

Quantization in time and by level. In this case the two previous methods are combined, so this method of quantization is also called combined. Under combined quantization, at predetermined moments of time the quantized signal is assigned the value of the nearest fixed level reached by the signal being quantized. This value is retained until the next quantization moment.

Graphs of the signal being quantized and of the quantized signal are shown in Fig. 24. On the graph of the quantized signal x*(t), dots mark the values of the levels nearest to the values of the signal being quantized at the quantization moments. Changes in the quantized signal occur at quantization moments that are multiples of the quantization period T. Thus, the quantized signal is characterized by the quantization period and the value of the nearest fixed level.

A typical example of a device in which combined quantization takes place is the analog-to-digital converter (ADC), as well as digital devices built around one. The output information of such devices is updated with a period determined by the duration of converting the input signal into a digital code (quantization in time), and is represented with a finite accuracy determined by the quantization resolution, i.e. by the bit width of the code representing the quantized signal.
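A sketch of combined quantization is below; the period T, the step a, and the test signal are illustrative assumptions, not the parameters of any particular ADC.

```python
# Combined (time + level) quantization: sample every T seconds and snap
# each sample to the nearest level of step a.
import math

T = 0.05   # quantization period, s (assumed)
a = 0.25   # quantization step, i.e. the level interval (assumed)

def quantize(signal, duration=1.0):
    """Return (time, level) pairs of the quantized signal."""
    out = []
    t = 0.0
    while t <= duration:
        level = round(signal(t) / a) * a   # nearest fixed level
        out.append((round(t, 3), level))
        t += T
    return out

samples = quantize(lambda t: math.sin(2 * math.pi * t))
print(samples[:4])   # a step function: values change only at multiples of T
```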

The sampling frequency (or sample rate) is the rate at which samples are taken from a signal that is continuous in time when it is sampled (in particular, by an analog-to-digital converter). It is measured in hertz.

The term is also used for the reverse, digital-to-analog conversion, especially if the sampling rates of the direct and inverse conversions are chosen to be different (this technique, also called "time scaling", is found, for example, in the analysis of ultra-low-frequency sounds emitted by marine animals).

The higher the sampling frequency, the wider the spectrum of the signal can be represented in the discrete signal. As follows from the Kotelnikov theorem, in order to uniquely restore the original signal, the sampling frequency must be more than twice the highest frequency in the signal spectrum.

Some of the audio sampling rates used are:

· 8000 Hz - phone, enough for speech, Nellymoser codec;

· 12 000 Hz - rarely used in practice;

· 22 050 Hz - radio;

· 44 100 Hz - used in Audio CD;

· 48 000 Hz - DVD, DAT;

· 96 000 Hz - DVD-Audio (MLP 5.1);

· 192 000 Hz - DVD-Audio (MLP 2.0);

· 2,822,400 Hz - SACD, a single-bit delta-sigma modulation process known as DSD - Direct Stream Digital, jointly developed by Sony and Philips;

· 5,644,800 Hz - Double sample rate DSD, 1-bit Direct Stream Digital with twice the sample rate of SACD. Used in some professional DSD recorders.

Proof

Fix some e > 0 and let X^n = (X_1, ..., X_n) be a sequence of n independent, identically distributed source symbols with entropy H(X). The typical set A(n,e) is defined as the set of sequences whose probability satisfies

2^(-n(H(X)+e)) <= p(x_1, ..., x_n) <= 2^(-n(H(X)-e)).

The asymptotic equipartition property (AEP) shows that for large enough n, a sequence generated by the source is almost certain to lie in the typical set: for large enough n,

P(X^n in A(n,e)) > 1 - e.

Note that:

· the probability that a sequence falls outside A(n,e) is less than e;

· |A(n,e)| <= 2^(n(H(X)+e)), since the total probability is at most 1 (apply the lower probability bound to each term in the typical set);

· |A(n,e)| >= (1 - e) * 2^(n(H(X)-e)) (apply the upper probability bound to each term and the lower bound on the probability of the whole typical set).

Hence n(H(X) + e) + 1 bits are enough to distinguish any string in the typical set.

Encoding algorithm: the encoder checks whether the incoming sequence is typical; if it is, it outputs the index of the sequence within the typical set; if it is not, it outputs an arbitrary number of the same length. Since an atypical input occurs only with probability at most e, the encoder makes an error with probability no higher than e.

Proof of the converse: the converse is proved by showing that any code set whose size is smaller than 2^(n(H(X)-e)) (in the sense of the exponent) can cover only a set of sequences whose probability is bounded away from 1.

Proof of the source coding theorem for symbol codes

Let s_i denote the code-word length of each possible symbol x_i (1 <= i <= n). Define q_i = 2^(-s_i)/C, where C is chosen so that the q_i sum to 1. Then

H(X) = -sum_i p_i log2 p_i
     <= -sum_i p_i log2 q_i
     = -sum_i p_i log2 2^(-s_i) + sum_i p_i log2 C
     = sum_i p_i s_i + log2 C
     <= E[S],

where the second line is Gibbs' inequality and the final step uses the Kraft inequality, which gives C <= 1 and hence log2 C <= 0.

For the matching upper bound, one may set s_i equal to the ceiling of -log2 p_i; the minimum expected code length S then satisfies

H(X) <= E[S] < H(X) + 1.

Topic: Shannon's results and coding problems.

Data compression.

Encoded messages are transmitted over communication channels, stored in memory devices, and processed by processors. The volumes of data circulating in automated control systems (ACS) are large, so in many cases it is important to provide an encoding of the data characterized by the minimum length of the resulting messages. This is the data compression problem. Its solution increases the speed of information transfer and decreases the memory required of storage devices, which ultimately increases the efficiency of the data processing system.

There are two approaches (or two stages) to data compression:

compression based on an analysis of the specific structure and semantic content of the data;

compression based on an analysis of the statistical properties of the encoded messages. Unlike the first, the second approach is universal and can be used whenever there is reason to believe that the messages obey probabilistic laws. Below we consider both approaches.

4.1. Compression based on the semantic content of the data

These methods are heuristic and unique to each case, but the main idea can be explained as follows. Let a set contain N elements. Then, to encode the elements of the set with a uniform code, log2(N) binary digits (rounded up) are required, and all binary code combinations are used only when N is a power of two. If not all combinations are used, the code is redundant. Thus, to reduce redundancy, one should try to delimit the set of possible values of the data elements and encode accordingly. In real conditions this is not always easy; some types of data have a very large set of possible values. Let us see how this is done in specific cases.

Transition from natural notations to more compact ones. The values of many specific data are encoded in a form convenient for humans to read. However, they usually contain more characters than necessary. For example, a date is written as "January 26, 1982" or, in short form, "26.01.82", although many code combinations, such as "33.18.53" or "95.00.11", are never used. To compress such data, the day can be encoded with five bits, the month with four, and the year with seven; i.e., the entire date takes no more than two bytes. Another way of writing a date, proposed as far back as the Middle Ages, is to record the total number of days elapsed since some reference point. In this case one is often limited to the last four digits of this representation. For example, May 24, 1967 is written as 0000, and counting days from that date obviously requires two bytes in packed decimal format.
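A sketch of this bit-packing scheme follows; the field layout (5 + 4 + 7 bits) matches the description above, while the 1900 base year for the 7-bit year field is an assumption for illustration.

```python
# Pack a date into 16 bits: 5 bits day, 4 bits month, 7 bits year offset.
def pack_date(day: int, month: int, year: int) -> int:
    return (day << 11) | (month << 7) | (year - 1900)   # base year assumed

def unpack_date(packed: int):
    return packed >> 11, (packed >> 7) & 0xF, (packed & 0x7F) + 1900

packed = pack_date(26, 1, 1982)
assert packed < 2 ** 16                    # fits in two bytes
assert unpack_date(packed) == (26, 1, 1982)
```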

CODING OF INFORMATION.

ABSTRACT ALPHABET

Information is transmitted in the form of messages. Discrete information is recorded using some finite set of signs, which we shall call letters, without giving this word its usual narrow meaning (such as "Russian letters" or "Latin letters"). A letter in this extended sense is any of the signs established by some convention for communication. For example, in the usual transmission of messages in Russian, such signs are the Russian letters, uppercase and lowercase, punctuation marks, and the space; if the text contains numbers, then also the digits. In general, a letter is an element of some finite set (collection) of distinct signs. A set of signs in which their order is defined will be called an alphabet (the order of the signs in the Russian alphabet is well known: А, Б, ..., Я).

Consider some examples of alphabets.

1. Alphabet of uppercase Russian letters:

А Б В Г Д Е Ё Ж З И Й К Л М Н О П Р С Т У Ф Х Ц Ч Ш Щ Ъ Ы Ь Э Ю Я

2. Morse alphabet:

3. Alphabet of keyboard symbols for IBM PC (Russified keyboard):

4. Alphabet of signs of a regular six-sided dice:

5. Alphabet of Arabic numerals:

0 1 2 3 4 5 6 7 8 9

6. Alphabet of hexadecimal digits:

0123456789ABCDEF

This example, in particular, shows that the characters of one alphabet can be formed from the characters of other alphabets.

7. Alphabet of binary digits:

0 1

Alphabet 7 is one example of the so-called binary alphabets, i.e. alphabets consisting of two characters. Other examples are the binary alphabets 8 and 9:

8. Binary alphabet "dot," dash ":. _

9. Binary alphabet "plus", "minus": + -

10. Alphabet of capital Latin letters:

ABCDEFGHIJKLMNOPQRSTUVWXYZ

11. Alphabet of the Roman numeral system:

I V X L C D M

12. Alphabet of the flowchart language for depicting algorithms:

13. Alphabet of the programming language Pascal (see Chapter 3).

ENCODING AND DECODING

In a communication channel, a message composed of the characters (letters) of one alphabet can be converted into a message composed of the characters (letters) of another alphabet. The rule describing the one-to-one correspondence of the letters of the alphabets under such a transformation is called a code, and the process of converting a message is called recoding. Such a transformation can be carried out at the moment the message enters the communication channel from the source (encoding) and at the moment it is received by the recipient (decoding). The devices that provide encoding and decoding are called the encoder and decoder, respectively. Fig. 1.5 shows a diagram illustrating the process of transmitting a message in the case of recoding, as well as the effect of interference (see the next section).

Fig. 1.5. The process of transmitting a message from a source to a receiver

Let's look at some examples of codes.

1. Morse code in the Russian version (the alphabet composed of the Russian capital letters and the Arabic numerals is mapped to the Morse alphabet):

2. The Trisime code (each character of the Latin alphabet is assigned a combination of three characters: 1, 2, 3):

A 111   D 121   G 131   J 211   M 221   P 231   S 311   V 321   Y 331
B 112   E 122   H 132   K 212   N 222   Q 232   T 312   W 322   Z 332
C 113   F 123   I 133   L 213   O 223   R 233   U 313   X 323   . 333

The Trisime code is an example of a so-called uniform code (one in which all code combinations contain the same number of characters - in this case three). An example of a non-uniform code is Morse code.

3. Encoding of numbers by signs of different number systems, see §3.

THE CONCEPT OF SHANNON'S THEOREMS

It was noted earlier that when messages are transmitted over communication channels, interference can arise that distorts the received characters. For example, if you try to convey a spoken message to a person at a considerable distance from you in windy weather, it may be badly distorted by interference such as the wind. In general, transmitting messages in the presence of interference is a serious theoretical and practical problem, whose importance is growing with the widespread introduction of computer telecommunications, in which interference is inevitable. When working with coded information distorted by interference, the following main problems can be distinguished: establishing the very fact that the information has been distorted; finding out exactly where in the transmitted text this happened; and correcting the errors, at least with some degree of confidence.

The specifics of the various application areas of information transmission systems require different approaches to their implementation. A system for transmission over telephone channels, for example, is completely different from a space or tropospheric communication system, both in technical design and in parameters. However, the principles of construction and the purpose of the individual devices of different systems have much in common. In the general case, the scheme of an information transmission system is shown in Fig. 2.

Messages of various physical natures can be transmitted: digital data received from a computer, speech, the texts of telegrams, control commands, the results of measurements of various physical quantities. Naturally, all these messages must first be converted into electrical oscillations that retain all the properties of the original messages, and then unified, i.e., presented in a form convenient for subsequent transmission. The source of information in Fig. 2 is understood as a device in which all the operations mentioned above are performed.

For more economical use of the communication line, as well as to reduce the influence of various interferences and distortions, the information transmitted from the source can be further converted using an encoder.

Fig. 2. Block diagram of information transmission.

This transformation, as a rule, consists of a number of operations, including taking into account the statistics of the incoming information to eliminate redundancy (statistical coding) and introducing additional elements to reduce the effect of noise and distortion (error-correcting coding).

As a result of this series of transformations, a sequence of elements is formed at the output of the encoder, which the transmitter converts into a form convenient for transmission over the communication line. A communication line is the medium through which signals pass from the transmitter to the receiver; the influence of this medium must be taken into account. In information transmission theory one often encounters the concept of a "communication channel" - the set of means that ensure the transmission of signals.

At the input of the receiver, in addition to the signals that have passed through the medium, various kinds of interference also arrive. From this mixture of signal and noise, the receiver extracts a sequence that should correspond to the sequence at the output of the encoder. However, because of the action of interference, the influence of the medium, and the errors of the various transformations, a complete correspondence cannot be obtained. Therefore, this sequence is fed to the decoder, which performs the operations of converting it into a sequence corresponding to the transmitted one. The completeness of this correspondence depends on a number of factors: the corrective capabilities of the coded sequence, the levels of the signal and the interference together with their statistics, and the properties of the decoding device. The sequence formed as a result of decoding is sent to the recipient of the information. Naturally, when designing information transmission systems, one always strives to ensure operating conditions under which the difference between the information produced by the source and the information delivered to the recipient is small and does not exceed a certain allowable value. The main indicator of transmission quality here is the fidelity of information transmission - the degree of correspondence between the received message and the transmitted one.

Today, information spreads so fast that there is not always enough time to comprehend it. Most people rarely think about how and by what means it is transmitted, much less imagine the scheme of information transmission.

Basic concepts

The transfer of information is the physical process of moving data (signs and symbols) in space. From the standpoint of data transmission, it is a pre-planned, technically equipped operation that moves units of information in a set time from a so-called source to a receiver through an information channel, or data transmission channel.

A data transmission channel is a set of means, or the medium, for data propagation. In other words, it is the part of the information transmission scheme that ensures the movement of information from the source to the recipient and, under certain conditions, back.

There are many classifications of data transmission channels. The main ones are radio, optical, acoustic (wireless), and wired channels.

Technical channels of information transfer

Technical data transmission channels proper include radio channels, fiber-optic channels, and cable. A cable may be coaxial or twisted pair: the former is an electric cable with a copper wire inside, and the latter consists of pairwise-insulated twisted pairs of copper wires in a dielectric sheath. These cables are quite flexible and easy to use. An optical fiber consists of fiber-optic strands that transmit light signals by reflection.

The main characteristics of a channel are its throughput and noise immunity. Throughput is usually understood as the amount of information that can be transmitted over the channel in a given time; noise immunity is the channel's resistance to external interference (noise).

Understanding Data Transfer

If no particular field of application is specified, the general scheme of information transfer looks simple and includes three components: a source, a receiver, and a transmission channel.

Shannon's scheme

Claude Shannon, an American mathematician and engineer, was one of the founders of information theory. He proposed a scheme for transmitting information through technical communication channels.

This diagram is easy to understand, especially if you picture its elements as familiar objects and phenomena. For example, the source of information is a person talking on the phone. The handset is the encoder, converting speech (sound waves) into electrical signals. The data transmission channel in this case is the communication nodes - in general, the entire telephone network leading from one telephone set to another. The other subscriber's handset acts as the decoding device: it converts the electrical signal back into sound, that is, into speech.

In this scheme of the information transfer process, the data is represented by a continuous electrical signal. Such communication is called analog.

The concept of coding

Coding is the transformation of information sent by a source into a form suitable for transmission over the communication channel used. The most familiar example of coding is Morse code, in which information is converted into a sequence of dots and dashes, that is, of short and long signals. The receiving party must decode this sequence.

Modern technologies use digital communication, in which information is converted (encoded) into binary data, that is, 0s and 1s - there is even a binary alphabet. Such communication is called discrete.

Interference in information channels

Noise is also present in the data transmission scheme. The concept of "noise" here means interference that distorts the signal and, as a result, causes its loss. The causes of interference vary; for example, information channels may be poorly protected from one another. To prevent interference, various technical methods of protection are used: filters, shielding, and so on.

K. Shannon developed and proposed for use a coding theory for combating noise. Its idea is that, since information is lost under the influence of noise, the transmitted data should be redundant - but not so redundant as to reduce the transmission rate too much.

In digital communication channels, information is divided into portions - packets, for each of which a checksum is calculated. This sum is transmitted along with each packet. The receiver of the information recalculates the sum and accepts the packet only if the two match; otherwise the packet is sent again, and so on until the sent and received checksums coincide.