Introduction to RF measurements. Role and significance of measurements in communications.

1. Role and significance of measurements in modern communication equipment.

In the production, operation and repair of communication equipment, it is necessary to check and maintain within acceptable limits the technical parameters of lines, channels and electronic equipment of various types, to monitor their operation, to locate and eliminate faults in case of failure, to measure individual parts and assemblies, and to make adjustments. When developing new equipment, it is necessary to check the parameters both of individual parts (coils, capacitors, resistors, amplifier stages) and of entire devices. During the installation of new equipment, a number of measurements of individual blocks, units and sections must also be made, which facilitates and shortens the commissioning of the facility. This activity is precisely the subject of measurements in communication technology. To appreciate the wide range of issues addressed by these measurements, it should be recalled that the frequency range starts from zero hertz (direct current) and reaches tens of gigahertz. The dynamic range of voltages extends from microvolts to hundreds of kilovolts at different frequencies, and the power range from microwatts to several megawatts. Moreover, the same quantity is measured by different methods, and often with different instruments, depending on the frequency range. Therefore, both the methods and the measuring instruments are very numerous and in many cases specific to a given problem.

The tasks of measurement in communications, though very diverse, can be grouped as follows:

1. Measurement of the parameters of the individual elements in the equipment;

2. Determining the quantities that characterize the operation of the individual blocks or systems (levels of the individual blocks, attenuation, gain);

3. Recording of different characteristics of the individual blocks, equipment (frequency, transient, phase characteristics, etc.);

4. Measurement of the quantities that characterize the distortions of the transmitted signals;

5. Measurement of different types of interference, noise, parasitic modulation.

All of these groups require increasing the accuracy and speed of the measurements, which calls for the introduction of semi-automated or fully automated measurement.

There are various methods of measurement, but they can all be reduced to the following main groups:

1. Direct methods, or methods of direct evaluation. In these methods, the measured value is read directly from the scale of the instrument or from a digital display. The method is simple, quick and easy.

2. Indirect methods. In these methods, the required quantity is calculated from other directly measured quantities.

3. Comparative or differential methods. In these methods, the unknown quantity is compared with a known one, the comparison continuing either until the two quantities are equalized or until the difference between them is measured.

4. Zero methods. These are a special case of the comparative methods in which the two quantities are brought to equality; the bridge methods are typical representatives. They are among the most accurate methods.

5. Resonance methods. These methods are relatively simple and use resonance phenomena in a given oscillating system. Their accuracy depends mainly on the sharpness of the resonance curve and can differ considerably from case to case.

6. Digital methods. These are the most modern methods. They offer very high accuracy and speed and eliminate the subjective factor in reading the result.

The set of measurements performed on a given facility, the conditions under which they are performed, their sequence, the equipment used and the measurement methods determine the so-called measurement methodology. When the quality indicators of a piece of equipment are specified, the methodology by which they are measured must also be given, because a different methodology can yield results that differ significantly from those specified.

2. System of measurement units.

2.1. Definition of measurement unit.

A measurement unit is the value of a physical quantity that is accepted as the basis for comparison in the quantitative assessment of quantities of the same kind. In accordance with this definition, the basic equation of measurement has the form Y = K·N, where Y is the measured quantity, N is the measurement unit and K is the numerical value of the measured quantity in the selected measurement unit. This equation can be used to record the result of a performed measurement. For example, in U = 220 V, U is the measured quantity (voltage), V is the measurement unit (volt) and 220 is the numerical value of the measured quantity.

2.2. Unified international system of measurement units (SI).

In October 1960, the XI General Conference on Weights and Measures was held in Paris. It adopted a decision establishing the unified International System of Units (SI). This conference also defined the units of length (meter) and time (second). According to the SI system, one meter is defined as a length equal to 1650763.73 wavelengths in vacuum of the orange line emitted by krypton with atomic weight 86. The unit of time, the second, is defined in the SI system as the time during which the electron shell of the cesium atom with atomic weight 133 performs 9192631770 oscillations. This standard gives an error of the order of 10⁻¹¹. The international designations of the units are obligatory in the scientific and technical literature; the designations are written in lower-case letters, and those derived from the names of scientists have their first letter capitalized. The SI system includes seven base units, two supplementary units and twenty-seven derived units. The base units are: for length - meter [m]; for mass - kilogram [kg]; for time - second [s]; for electric current - ampere [A]; for thermodynamic temperature - kelvin [K]; for luminous intensity - candela [cd]; for amount of substance - mole [mol]. The system does not include such widely used units as the kilogram-force, horsepower, technical atmosphere, calorie, millimeter of mercury, bar, etc.

Multiples and submultiples of the units are formed by multiplying or dividing the units by the appropriate power of 10: tera (T) - 10¹²; giga (G) - 10⁹; mega (M) - 10⁶; kilo (k) - 10³; milli (m) - 10⁻³; micro (μ) - 10⁻⁶; nano (n) - 10⁻⁹; pico (p) - 10⁻¹². In Bulgaria, the SI system was introduced with BDS 3952-63 from July 1, 1963, and from 1965 its use became mandatory.
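As a simple illustration of how these decimal prefixes scale a base unit, the short sketch below (in Python, with illustrative function and variable names) converts prefixed values to the base SI unit:

# Minimal sketch: converting prefixed values to the base SI unit.
PREFIX_FACTORS = {
    "T": 1e12, "G": 1e9, "M": 1e6, "k": 1e3,
    "m": 1e-3, "u": 1e-6, "n": 1e-9, "p": 1e-12,
}

def to_base_units(value, prefix=""):
    """Return the value expressed in the base unit (factor 1 if no prefix)."""
    return value * PREFIX_FACTORS.get(prefix, 1.0)

print(to_base_units(2.4, "G"))  # 2.4 GHz -> 2400000000.0 Hz
print(to_base_units(75, "m"))   # 75 mV  -> 0.075 V
print(to_base_units(10, "p"))   # 10 pF  -> 1e-11 F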

In communication technology, in addition to the official electrical units included in the SI system, a number of units, concepts and definitions are used which are recommended by the Interdepartment Radio Advisory Committee (IRAC) and the International Telegraph and Telephone Consultative Committee (CCITT). These are the logarithmic units and the relative representation of quantities.

A) Relative representation of quantities.

It is very convenient to express a quantity in relative units, as a ratio of two homogeneous quantities, one of which is taken as a basis: for example, the ratio of the voltage at the output of a device to the voltage at its input, or vice versa. In this way a dimensionless quantity is obtained, such as the gain, the traveling-wave ratio, the quality factor and others. Very often a given quantity is also expressed as a percentage of another quantity conditionally accepted as basic.

B) Logarithmic units.

The logarithmic ratio of two quantities is used very often in communication technology. This ratio is called a "level", and the units in which it is expressed are called transmission units. The level is defined by a logarithmic relation over the ratio of two homogeneous quantities and is measured in nepers [Np] or in decibels [dB], depending on whether a natural or a decimal logarithm is used. In practice, three levels are mainly used: power, voltage and current levels, each of which can be absolute or relative.

Absolute levels are defined by logarithmic relations over the ratio of the power, voltage or current (P, U, I) at a given point of the circuit to a power, voltage or current (P0, U0, I0) accepted as basic. The base power P0 is taken equal to 1 mW, and the base voltage and current are calculated depending on the load resistance on which this power is delivered (600 Ω, 150 Ω, 75 Ω). At a resistance of 600 Ω these values are U0 = 0.775 V and I0 = 1.29 mA respectively.

Relative levels are defined in a similar way, with the comparison quantities taken as the power, voltage or current at another point of the electrical circuit, most often at its beginning. For an arbitrary resistance R0, the base voltage and current are determined by the expressions:

U0 = √(P0·R0),   I0 = √(P0/R0).

The absolute power, voltage and current levels are given by the following expressions:

pP = 10·lg(P/P0) [dB],   pU = 20·lg(U/U0) [dB],   pI = 20·lg(I/I0) [dB],

or, when the natural logarithm is used, pP = 0.5·ln(P/P0) [Np], pU = ln(U/U0) [Np], pI = ln(I/I0) [Np].

The relative power, voltage and current levels are given by similar expressions; for example, the relative power level is defined as:

pP = 10·lg(P/Px) [dB],

where Px is the power at the point of the circuit accepted as the reference (most often its beginning).

Analog or digital instruments called level meters are used to measure levels. In essence, these are electronic voltmeters with different input resistances (600 Ω, 150 Ω, 75 Ω and a high-impedance input) whose scales are graduated in nepers or decibels. They measure the absolute voltage level; the absolute power level can be determined from the measured voltage level if the load resistances on which the powers P and P0 are delivered are known. The absolute voltage level can also be measured indirectly, by measuring the voltage with a voltmeter and then calculating the level. The logarithmic unit decibel is also used in acoustics. When the sound pressure P1 is compared to the base pressure P0 by the expression:

LP = 20·lg(P1/P0) [dB],

the sound pressure is obtained in logarithmic units. The base sound pressure is assumed to be P0 = 2·10⁻⁵ [N/m²].
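As a numerical illustration of the expressions above, the following Python sketch computes absolute power and voltage levels in decibels; the function names and sample values are chosen here only for the example, and a 600 Ω load is assumed:

import math

P0 = 1e-3   # base power: 1 mW
R0 = 600.0  # assumed load resistance, ohms

def absolute_power_level_db(p_watts):
    """Absolute power level: 10*lg(P/P0) in dB (relative to 1 mW)."""
    return 10.0 * math.log10(p_watts / P0)

def absolute_voltage_level_db(u_volts, r_ohms=R0):
    """Absolute voltage level: 20*lg(U/U0), where U0 = sqrt(P0*R)."""
    u0 = math.sqrt(P0 * r_ohms)
    return 20.0 * math.log10(u_volts / u0)

print(absolute_power_level_db(0.1))      # 100 mW -> +20 dB
print(absolute_voltage_level_db(0.775))  # ~0 dB on a 600-ohm load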

Table 1.1 gives the values of U0 and I0 for other values of the resistance R0.

Table 1.1

P0 [mW]   R0 [Ω]   U0 [V]   I0 [mA]
1         600      0.775    1.29
1         150      0.387    2.58
1         135      0.367    2.72
1          75      0.274    3.65
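The values in Table 1.1 follow directly from U0 = √(P0·R0) and I0 = √(P0/R0); the short check below recomputes them (a sketch, with P0 fixed at 1 mW):

import math

P0 = 1e-3  # base power, 1 mW

for r0 in (600, 150, 135, 75):
    u0 = math.sqrt(P0 * r0)        # base voltage, V
    i0 = math.sqrt(P0 / r0) * 1e3  # base current, mA
    print(f"R0 = {r0:>3} ohm: U0 = {u0:.3f} V, I0 = {i0:.2f} mA")
# 600 ohm -> 0.775 V, 1.29 mA; 75 ohm -> 0.274 V, 3.65 mA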
2.3. Standard measuring frequencies and resistances.

A number of operational and production measurements have to be performed under simplified conditions, which requires the introduction of standard (nominal) frequencies and resistances.

A) Standard measurement frequencies.

The standard frequencies in the audio range are the ultra-low frequencies (ULF) 800 Hz and 1000 Hz, at which many measurements are performed, especially for determining parameters that depend only weakly on frequency.

In the high-frequency range 1 MHz is used; in telegraph transmission, 25 Hz (50 baud); in data transmission, 600 Hz (1200 baud).

B) Standard resistances.

Devices and circuits with standard characteristic resistances are used in communication technology, which in turn requires standardization of the output resistances of the measuring generators, the input resistances of a number of measuring instruments and the load resistances of the devices.

The resistance of 600 Ω (active) is accepted as the basic standard resistance in measuring equipment. This is the average characteristic resistance of open-wire copper circuits. The resistances 135 Ω, 150 Ω and 180 Ω (for systems with symmetrical cables), 75 Ω (for systems with coaxial cables), and 100 Ω and 300 Ω (for transmission of messages over transmission lines) are also used as standard resistances.

3. Introduction to error theory.

Each measured quantity has some standard against which it should be compared. However, the comparison is not always direct; in most cases it is made with the help of an auxiliary device or a system of devices. Some inaccuracy is always present, i.e. an error is made in this comparison. The following types of errors are distinguished in measuring technology:

1)    Absolute error.

If the actual value of the measured quantity is X0 and the measured value is X, then the difference ΔX between them is called the absolute error and is determined by the expression:

ΔX=X-X0

In practice, the term relative error is used more often; it represents the ratio of ΔX to a reference value. Depending on which value the absolute error is referred to, three types of relative error are defined.

2)    Actual relative error.

This error is determined by the expression:

σD[%]=ΔX/X0*100.

This error indicates what percentage of the actual value the error may amount to. For example, a voltmeter with a range of 300 V, when calibrated against a standard, shows 100 divisions while the standard shows 97 divisions; for this case:

σD = (ΔX/X0)*100 = (3/97)*100 ≈ 3.1%

3)    Nominal (normal) error.

It shows, as a percentage, what error the instrument makes when measuring an unknown quantity and is determined by the expression:

σN[%]=ΔX/X*100.

For the above case a nominal error of (3/100)*100 = 3% is obtained.

4)    Relative given error.

This error is given by the expression:

σG[%]=ΔX/Xmax*100,

where Xmax is the maximum value that the device can measure.

For the above case Xmax = 300 V. Therefore, the error is σG = (3/300)*100 = 1%. This error determines the accuracy class of the device, in this case accuracy class 1.
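The four error definitions above can be gathered into a short Python sketch; the numbers repeat the voltmeter example from the text (reading of 100 divisions against 97 on the standard, range 300 V), and the function names are illustrative:

def absolute_error(x_measured, x_actual):
    return x_measured - x_actual                                    # dX = X - X0

def actual_relative_error(x_measured, x_actual):
    return absolute_error(x_measured, x_actual) / x_actual * 100    # % of X0

def nominal_error(x_measured, x_actual):
    return absolute_error(x_measured, x_actual) / x_measured * 100  # % of X

def given_error(x_measured, x_actual, x_max):
    return absolute_error(x_measured, x_actual) / x_max * 100       # % of Xmax

# Voltmeter example from the text.
print(actual_relative_error(100, 97))  # ~3.1 %
print(nominal_error(100, 97))          # 3.0 %
print(given_error(100, 97, 300))       # 1.0 %, i.e. accuracy class 1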

According to probability theory, the error can be estimated accurately enough by performing multiple measurements. For each measurement the absolute error ΔX will be different, owing to the various factors that affect the measurement. If n measurements are made, n values of ΔX will be obtained, some of which may coincide. If we denote by Y(ΔX) the probability of obtaining a given absolute error, it will be some function of ΔX. This function has the form shown in Fig. 1.

This function is called the normal law of distribution of the error, also known in the literature as the Gaussian distribution of a random variable. The reasons that lead to different values of ΔX vary, but experience and statistics show the following:

- Small errors are more common;

- Positive and negative errors occur approximately equally often;

- As the number of measurements increases, the arithmetic mean of the error decreases.

If we denote by σ the root-mean-square value of the error, then according to error theory it is accepted that ΔXmax = 3σ.

Fig. 1. Normal (Gaussian) law of distribution of the error ΔX.
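A minimal sketch, assuming a series of repeated readings of the same quantity, of how the root-mean-square error σ and the 3σ bound can be estimated; the sample readings are invented for illustration:

import math

def rms_error(readings, true_value):
    """Root-mean-square value of the absolute errors dX = X - X0."""
    return math.sqrt(sum((x - true_value) ** 2 for x in readings) / len(readings))

# Invented repeated readings of a quantity whose actual value is 100.0
readings = [100.2, 99.8, 100.1, 99.9, 100.3, 99.7, 100.0, 100.1]
sigma = rms_error(readings, 100.0)
print(f"sigma = {sigma:.3f}, estimated maximum error 3*sigma = {3 * sigma:.3f}")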

In measuring technology, errors can be summarized in three characteristic groups:

A) Systematic errors.

These errors are due to the measuring instruments and methods: for example, improper calibration of the instrument, parasitic capacitances and couplings, the influence of temperature, humidity and others. In repeated measurements they either do not change or change according to a definite law. This means they can be evaluated (measured) and the final result corrected accordingly.

B) Random errors.

These errors are due to all kinds of irregular influences: insufficient resolution of the instruments, subjective factors, inaccurate adjustment of the equipment and others. Under the same conditions, random errors can be reduced by repeating the measurements and additionally processing the results.

C) Gross errors.

When in some measurement a result is obtained that deviates sharply from the other results or from the expected result, the error is called gross and is due to incorrect reading, incorrect setting or improper switching-on of the instrument. Such errors are easily noticed, and the results of these measurements are not taken into account when processing the final results. When the measuring method is incorrectly selected, when the instrument is connected incorrectly or when its measurement parameters are inappropriate, the error is called methodical. To avoid such errors, the correct measurement methodology and an instrument with appropriate characteristics and parameters must always be selected.