[[ttcn-3-language-extensions]]
= TTCN–3 Language Extensions
:toc:
:table-number: 3

The Test Executor supports the following non-standard additions to TTCN–3 Core Language in order to improve its usability or provide backward compatibility with older versions.

== Syntax Extensions

The compiler does not report an error or warning if the semi-colon is missing at the end of a TTCN–3 definition although the definition does not end with a closing bracket.

The statement block is optional after the guard operations of `altsteps`, `alt` and `interleave` constructs and in the response and exception handling part of `call` statements. A missing statement block has the same meaning as an empty statement block. If the statement block is omitted, a terminating semi-colon must be present after the guard statement.
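For instance, the following `alt` construct is accepted; the first branch has no statement block, which is equivalent to an empty one (the port, timer and template names below are illustrative only):

[source]
----
alt {
  // no statement block: equivalent to an empty block,
  // but the terminating semi-colon is mandatory
  [] MyPort.receive(t_expected);
  [] T_guard.timeout { setverdict(fail); }
}
----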

The standard escape sequences of C/{cpp} programming languages are recognized and accepted in TTCN–3 character string values, that is, in literal values of `charstring` and `universal` `charstring` types, as well as in the arguments of built-in operations `log()` and `action()`.

NOTE: As a consequence of the extended escape sequences, and in contrast with the TTCN–3 standard, the backslash character itself always has to be duplicated within character string values.

The following table summarizes all supported escape sequences of TTCN–3 character string values:

.Character string escape sequences
[cols=",,",options="header",]
|===
|*Escape sequence* |*Character code (decimal)* |*Meaning*
|\a |7 |bell
|\b |8 |backspace
|\f |12 |new page
|\n |10 |line feed
|\r |13 |carriage return
|\t |9 |horizontal tabulator
|\v |11 |vertical tabulator
|\\ |92 |backslash
|\" |34 |quotation mark
|\' |39 |apostrophe
|\? |63 |question mark
|\<newline> |nothing |line continuation
|\NNN |NNN |octal notation (NNN is the character code in at most 3 octal digits)
|\xNN |NN |hexadecimal notation (NN is the character code in at most 2 hexadecimal digits)
|"" |34 |quotation mark (standard notation of TTCN–3)
|===

NOTE: Only the standardized escape sequences are recognized in matching patterns of character string templates because they have special meaning there. For example, inside string patterns `\n` denotes a set of characters rather than a single character.

Although the standard requires that characters of TTCN–3 `charstring` values must be between 0 and 127, TITAN allows characters between 0 and 255. The printable representation of characters with code 128 to 255 is undefined.

The compiler implements an ASN.1-like scoping for TTCN–3 enumerated types, which means it allows the re-use of the enumerated values as identifiers of other definitions. The enumerated values are recognized only in contexts where enumerated values are expected; otherwise the identifiers are treated as simple references. However, using identifiers this way may cause misleading error messages and complicated debugging.
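A sketch of what this ASN.1-like scoping permits (all names are hypothetical):

[source]
----
type enumerated Color { red, green, blue };

// re-uses the identifier of the enumerated value "red"
const integer red := 5;

function f_scoping() {
  var Color v_c := red;   // enumerated value: a Color is expected here
  var integer v_i := red; // simple reference to the integer constant
}
----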

The compiler allows the local definitions (constants, variables, timers) to be placed in the middle of statement blocks, that is, after other behavior statements. The scope of such definitions extends from the statement following the definition to the end of the statement block. Forward-referencing of local definitions and jumping forward across them using `goto` statements are not allowed.
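A minimal illustration of a mid-block local definition (hypothetical names):

[source]
----
function f_count() return integer {
  log("counting started");
  // local definition after another statement: accepted by TITAN;
  // its scope extends from here to the end of the block
  var integer v_counter := 0;
  v_counter := v_counter + 1;
  return v_counter;
}
----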

The compiler accepts in-line compound values in the operands of TTCN–3 expressions although the BNF of the standard allows only single values. The only meaningful use of the compound operands is with the comparison operators, that is, == and !=. Two in-line compound values cannot be compared with each other because their types are unknown; at least one operand of the comparison must be a referenced value. This feature has a limitation: In the places where in-line compound templates are otherwise accepted by the syntax (e.g. in the right-hand side of a variable assignment or in the actual parameter of a function call) the referenced value shall be used as the left operand of the comparison. Otherwise the parser gets confused when seeing the comparison operator after the compound value.

Examples:
[source]
----
// invalid since neither of the operands is of known type
if ({ 1, 2 } == { 2, 1 }) { }

// both are valid
while (v_myRecord == { 1, omit }) { }
if ({ f1 :=1, f2 := omit } != v_mySet) {}

// rejected because cannot be parsed
v_myBooleanFlag := { 1, 2, 3 } == v_myRecordOf;
f_myFunctionTakingBoolean({ 1, 2, 3 } != v_mySetOf);

// in reverse order these are allowed
v_myBooleanFlag := v_myRecordOf == { 1, 2, 3 };
f_myFunctionTakingBoolean(v_mySetOf != { 1, 2, 3 });
----

[[visibility-modifiers]]
== Visibility Modifiers

TITAN defines three visibility modifiers for module level definitions and component member definitions: public, private and friend (8.2.5 in <<13-references.adoc#_1, [1]>>).

On module level definitions they mean the following:

* The public modifier means that the definition is visible in every module importing its module.
* The private modifier means that the definition is only visible within the same module.
* The friend modifier means that the definition is only visible within modules that the actual module has declared as friend modules.

If no visibility modifier is provided, the default is the public modifier.

In component member definitions they mean the following:

* The public modifier means that any function/testcase/altstep running on that component can access the member definition directly.
* The private modifier means that only those functions/testcases/altsteps that run directly on the component type containing the definition can access it. If they run on a component type extending the one containing the definition, it will not be directly visible.

The friend modifier is not available within component types.

Example:
[source]
----
module module1
{
import from module2 all;
import from module3 all;
import from module4 all;

const module2Type akarmi1 := 1; //OK, type is implicitly public
const module2TypePublic akarmi2 := 2; //OK, type is explicitly public
const module2TypeFriend akarmi3 := 3; //OK, module1 is friend of module2
const module2TypePrivate akarmi4 := 4; //NOK, module2TypePrivate is private to module2

const module3Type akarmi5 := 5; //OK, type is implicitly public
const module3TypePublic akarmi6 := 6; //OK, type is explicitly public
const module3TypeFriend akarmi7 := 7; //NOK, module1 is NOT a friend of module3
const module3TypePrivate akarmi8 := 8; //NOK, module3TypePrivate is private to module3

type component User_CT extends Lib4_CT {};
function f_set3_Lib4_1() runs on User_CT { v_Lib4_1 := 0 } //OK
function f_set3_Lib4_2() runs on User_CT { v_Lib4_2 := 0 } //OK
function f_set3_Lib4_3() runs on User_CT { v_Lib4_3 := 0 } //NOK, v_Lib4_3 is private
}

module module2
{

friend module module1;

type integer module2Type;
public type integer module2TypePublic;
friend type integer module2TypeFriend;
private type integer module2TypePrivate;
} // end of module

module module3
{
type integer module3Type;
public type integer module3TypePublic;
friend type integer module3TypeFriend;
private type integer module3TypePrivate;
} // end of module

module module4 {
type component Lib4_CT {
var integer v_Lib4_1;
public var integer v_Lib4_2;
private var integer v_Lib4_3;
}
} // end of module
----

== The `anytype`

The special TTCN-3 type `anytype` is defined as shorthand for the union of all known data types and the address type (if defined) in a TTCN-3 module. This would result in a large amount of code having to be generated for the `anytype`, even if it is not actually used. For performance reasons, Titan only generates this code if a variable of `anytype` is declared or used, and does not create fields in the `anytype` for all data types. Instead, the user has to specify which types are needed as `anytype` fields with an extension attribute at module scope.

Examples:

[source]
----
module elsewhere {
  type float money;
  type charstring greeting;
}

module local {
  import from elsewhere all;
  type integer money;
  type record MyRec {
    integer i,
    float f
  }

  control {
    var anytype v_any;
    v_any.integer := 3;
    // ischosen(v_any.integer) == true

    v_any.charstring := "three";
    // ischosen(v_any.charstring) == true

    v_any.greeting := "hello";
    // ischosen(v_any.charstring) == false
    // ischosen(v_any.greeting) == true

    v_any.MyRec := { i := 42, f := 0.5 }
    // ischosen(v_any.MyRec) == true

    v_any.integer := v_any.MyRec.i - 2;
    // back to ischosen(v_any.integer) == true

    v_any.money := 0;
    // local money i.e. integer
    // not elsewhere.money (float)
    // ischosen(v_any.integer) == false
    // ischosen(v_any.money) == true

    // error: no such field (not added explicitly)
    // v_any.float := 3.1;

    // error: v_any.elsewhere.money
  }
}
with {
  extension "anytype integer, charstring" // adds two fields
  extension "anytype MyRec" // adds a third field
  extension "anytype money" // adds the local money type
  //not allowed: extension "anytype elsewhere.money"
  extension "anytype greeting" // adds the imported type
}
----

In the above example, the `anytype` behaves as a union with five fields named "integer", "charstring", "MyRec", "money" and "greeting". The anytype extension attributes are cumulative; the effect is the same as if a single extension attribute contained all five types.

NOTE: Field "greeting" of type charstring is distinct from the field "charstring" even though they have the same type (same for "integer" and "money").

Types imported from another module (elsewhere) can be added to the anytype of the importing module (local) if the type can be accessed by its unqualified name, which requires that it does not clash with any local type. In the example, the imported type "greeting" can be added to the anytype of module local, but "money" (a float) clashes with the local type "money" (an integer). To use the imported "money", it has to be qualified with its module name: a variable of type elsewhere.money can be declared, but elsewhere.money cannot be used as an anytype field.

== Ports and Test Configurations

If all instances of a TTCN–3 port type are intended to be used for internal communication only (i.e. between two TTCN–3 test components) the generation and linking of an empty Test Port skeleton can be avoided. If the attribute `with { extension "internal" }` is appended to the port type definition, all {cpp} code that is needed for this port will be included in the output modules.<<13-references.adoc#_9, [9]>>

If the user wants to use `address` values in the `to` and `from` clauses and sender redirects of TTCN–3 port operations, the `with { extension "address" }` attribute shall be used in the corresponding port type definition(s) to generate proper {cpp} code.
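The two attributes are attached to port type definitions like this (a sketch with hypothetical port and message type names):

[source]
----
// used only between test components: no Test Port skeleton is needed
type port MyInternalPort_PT message {
  inout integer;
} with { extension "internal" }

// address values may be used in to/from clauses and sender redirects
type port MySystemPort_PT message {
  inout MyMessageType;
} with { extension "address" }
----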

NOTE: When address is used in port operations, the corresponding port must have an active mapping to a port of the test system interface, otherwise the operation will fail at runtime. Using address values in `to` and `from` clauses implicitly means `system` as the component reference. (See section "Support of address type" in <<13-references.adoc#_16, [16]>> for more details.)<<13-references.adoc#_10, [10]>>

Unlike the latest TTCN–3 standard, our run-time environment allows connecting a TTCN–3 port to more than one port of the same remote test component. While such connections persist (usually in transient states), only receiving is allowed from that remote test component, because the destination cannot be specified unambiguously in the `to` clause of the `send` operation. Similarly, it is allowed to map a TTCN–3 port to more than one port of the system, although then it is not possible to send messages to the SUT.

[[parameters-of-create-operation]]
== Parameters of create Operation

The built-in TTCN–3 `create` operation can take a second, optional argument in the parentheses. The first argument, which is part of the standard, can assign a name to the newly created test component. The optional, non-standard second argument specifies the location of the component. Like the first one, the second argument is a value or expression of type `charstring`.

According to the standard the component name is a user-defined attribute for a test component, which can be an arbitrary string value containing any kind of characters including whitespace. It is not necessary to assign a unique name for each test component; several active test components can have the same name at the same time. The component name is not an identifier; it cannot be used to address test components in configuration operations as component references can. The name can be assigned only at component creation and it cannot be changed later.

Component name is useful for the following purposes:

* it appears in the printout when logging the corresponding component reference;
* it can be incorporated in the name of the log file (see the metacharacter `%n`);
* it can be used to identify the test component in the configuration file when specifying test port parameters (see section <<7-the_run-time_configuration_file.adoc#logging, `[LOGGING]`>>), component location constraints (see section <<7-the_run-time_configuration_file.adoc#components-parallel-mode, [COMPONENTS] (Parallel mode)>>) and logging options (see sections <<7-the_run-time_configuration_file.adoc#filemask, `FileMask`>> and <<7-the_run-time_configuration_file.adoc#consolemask, `ConsoleMask`>>).

Specifying the component location is useful when performing distributed test execution. The value used as location must be a host name, a fully qualified domain name, an IP address or the name of a host group defined in the configuration file (see section <<7-the_run-time_configuration_file.adoc#groups-parallel-mode, [GROUPS] (Parallel mode)>>). The explicit specification of the location overrides the location constraints given in the configuration file (see section <<7-the_run-time_configuration_file.adoc#components-parallel-mode, [COMPONENTS] (Parallel mode)>> for detailed description). If no suitable and available host is found the `create` operation fails with a dynamic test case error.

If only the component name is to be specified, the second argument may be omitted. If only the component location is specified a `NotUsedSymbol` shall be given in the place of the component name.

Examples:

[source]
----
//create operation without arguments
var MyCompType v_myCompRef := MyCompType.create;

// component name is assigned
v_myCompRef := MyCompType.create("myCompName");

// component name is calculated dynamically
v_myCompArray[i] := MyCompType.create("myName" & int2str(i));

// both name and location are specified (non-standard notation)
v_myCompRef := MyCompType.create("myName", "heintel");

// only the location is specified (non-standard notation)
v_myCompRef := MyCompType.create(-, "159.107.198.97") alive;
----

== Altsteps and Defaults

According to the TTCN–3 standard an `altstep` can be activated as a `default` only if all of its value parameters are `in` parameters. However, our compiler and run-time environment also allow the activation of altsteps with `out` or `inout` value or template parameters. In this case the actual parameters of the activated `default` shall be references to variables or template variables defined in the respective component type. This restriction is in accordance with the rules of the standard about timer parameters of activated defaults.

NOTE: Passing local variables or timers to defaults is forbidden because the lifespan of local definitions might be shorter than that of the `default` itself, which might lead to unpredictable behavior if the `default` is called after leaving the statement block that the local variable is defined in. Since ports can be defined only in component types, there is no restriction about the `port` parameters of `altsteps`. These restrictions are not applicable to direct invocations of `altsteps` (e.g. in `alt` constructs).
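A sketch of activating an altstep with an `out` parameter (all names are hypothetical); note that the actual parameter is a component variable:

[source]
----
type component MyComp_CT {
  var integer v_result;
  port MyPort_PT P;
}

altstep as_store(out integer p_result) runs on MyComp_CT {
  [] P.receive(integer: ?) -> value p_result;
}

function f_behavior() runs on MyComp_CT {
  // accepted by TITAN: v_result is defined in the component type
  var default d := activate(as_store(v_result));
}
----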

The compiler allows using a statement block after `altstep` instances within `alt` statements. The statement block is executed if the corresponding `altstep` instance was chosen during the evaluation of the alt statement and the `altstep` has finished without reaching a `repeat` or `stop` statement. This language feature makes the conversion of TTCN–2 test suites easier.
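For example (hypothetical names):

[source]
----
alt {
  [] MyPort.receive(t_expected) { setverdict(pass); }
  // the block below runs if as_handleErrors was chosen and
  // finished without reaching a repeat or stop statement
  [] as_handleErrors() { setverdict(fail); }
}
----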

NOTE: This construct is valid according to the TTCN–3 BNF syntax, but its semantics are not mentioned anywhere in the standard text.

The compiler accepts `altsteps` containing only an `[else]` branch. This is not allowed by the BNF as every `altstep` must have at least one regular branch (which can be either a guard statement or an `altstep` instance). This construct is practically useful if the corresponding `altstep` is instantiated as the last branch of the alternative.
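A sketch of an `[else]`-only altstep used as the last branch of an alternative (hypothetical names):

[source]
----
altstep as_fallback() runs on MyComp_CT {
  [else] {
    setverdict(fail);
    stop;
  }
}

alt {
  [] MyPort.receive(t_expected) { setverdict(pass); }
  [] as_fallback();
}
----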

== Interleave Statements

The compiler realizes TTCN–3 `interleave` statements using a different approach than the one described in section 7.5 of <<13-references.adoc#_1, [1]>>. The externally visible behavior of the generated code is equivalent to that of the canonical mapping, but our algorithm has the following advantages:

* The loop constructs `for`, `while` and `do-while` are accepted and supported without any restriction in `interleave` statements. The transformation of statements is done at a lower level than the TTCN–3 language, which does not restrict the embedded loops.
* Statements `activate`, `deactivate` and `stop` can also be used within `interleave`. The execution of these statements is atomic, so we see no reason why the standard forbids them.
* The size of our generated code is linear in contrast to the exponential code growth of the canonical algorithm. In other words, the {cpp} equivalent of every embedded statement appears exactly once in the output.
* The run-time realization does not require any extra operating system resources, such as multi-threading.
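For instance, a loop is accepted inside an `interleave` branch, which the canonical transformation of the standard would forbid (port and template names are hypothetical):

[source]
----
interleave {
  [] PCO1.receive(t_first) {
    // an embedded loop: allowed by TITAN within interleave
    for (var integer i := 0; i < 3; i := i + 1) {
      PCO1.receive(t_data);
    }
  }
  [] PCO2.receive(t_second) { }
}
----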

== Logging Disambiguation

The TTCN–3 log statement provides the means to write logging information to a file or display on console (standard error). Options <<7-the_run-time_configuration_file.adoc#filemask, `FileMask`>> and <<7-the_run-time_configuration_file.adoc#consolemask, `ConsoleMask`>> determine which events will appear in the file and on the console, respectively. The generated logging messages are of type `USER_UNQUALIFIED`.

The `log` statement accepts, among others, fixed character strings, TTCN–3 constants, variables, timers, functions, templates and expressions; for a complete list please refer to table 18 in <<13-references.adoc#_1, [1]>>. It is allowed to pass multiple arguments to a single `log` statement, separated by commas.

The TTCN-3 standard does not specify how logging information should be presented. The following sections describe how TITAN implements logging.

The arguments of the TTCN-3 statement `action` are handled according to the same rules as `log`.

=== Literal Free Text String

Strings entered between quotation marks (") <<13-references.adoc#_11, [11]>> and the results of special macros given in section <<ttcn3-macros, TTCN-3 Macros>> in the argument of the `log` statement are verbatim copied to the log. The escape sequences given in Table 4 are interpreted and the resulting non-printable characters (such as newlines, tabulators, etc.) will influence the printout.

Example:

[source]
----
log("foo\nbar");
//The log printout will look like this:
12:34:56.123456 foo
bar
----

=== TTCN-3 Values and Templates

Literal values, referenced values or templates, wildcards, compound values, in-line (modified) templates, etc. (as long as the type of the expression is unambiguous) are discussed in this section.

These values are printed into the log using TTCN-3 Core Language syntax so that the printout can be simply copied into a TTCN-3 module to initialize an appropriate constant/variable/template, etc.

In case of (`universal`) `charstring` values the delimiter quotation marks ("") are printed and the embedded non-printable characters are substituted with the escape sequences in the first 9 rows of Table 4. All other non-printable characters are displayed in the TTCN-3 quadruple notation.

If the argument refers to a constant of type `charstring`, the actual value is not substituted to yield a literal string.

Example:

[source]
----
const charstring c_string := "foo\000";
log(c_string);
//The log printout will look like this:
12:34:56.123456 "foo" & char(0, 0, 0, 0)
----

=== Built-in Function match()

For the built-in `match()` function the printout will contain the detailed matching process field-by-field (similarly to the failed `receive` statements) instead of the Boolean result.

This rule is applied only if the `match()` operation is the top-level expression to be logged, see the example below:

[source]
----
 // this will print the detailed matching process
log(match(v_myvalue, t_template));
 // this will print only a Boolean value (true or false)
log(not not match(v_myvalue, t_template));
----
All the other predefined and user-defined functions with actual arguments will print the return value of the function into the log according to the TTCN-3 standard.

=== Special TTCN-3 Objects

If the argument refers to a TTCN-3 `port`, `timer` or an array (slice) of the above, then the actual properties of the TTCN-3 object are printed into the log.

For ports the name and the state of the port are printed.

In case of timers the name of the timer, the default duration, the current state (`inactive`, `started` or `expired`), the actual duration and the elapsed time (if applicable) are printed in a structured form.
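For instance (hypothetical timer name):

[source]
----
timer T_wait := 5.0;
T_wait.start;
log(T_wait);
// the printout contains the name, the default duration,
// the current state and the elapsed time of the timer
----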

== Value Returning done

The compiler allows starting TTCN–3 functions that have a return type on PTCs. Such functions must have the appropriate `runs on` clause. If such a function terminates normally on the PTC, the returned value can be matched and retrieved in a `done` operation.

According to the TTCN-3 standard, the value redirect in a `done` operation can only be used to store the local verdict on the PTC that executed the behavior function. In TITAN the value redirect can also be used to store the behavior function's return value with the help of an optional template argument.

If this template argument is present, the compiler treats the statement as a value returning `done` operation, otherwise it is treated as a verdict returning `done`.

The following rules apply to the optional template argument and the value redirect:

* The syntax of the template and the value redirect is identical to that of the `receive` operation.
* If the template is present, then the type of the template and the variable used in the value redirect shall be identical. If the template is not present, then the type of the value redirect must be `verdicttype`.
* In case of a value returning done the return type shall be a TTCN–3 type marked with the following attribute: `with { extension "done" }`. It is allowed to mark and use several types in done statements within one test suite. If the type to be used is defined in ASN.1 then a type alias shall be added to one of the TTCN–3 modules with the above attribute.
* In case of a value returning done the type of the template or variable must be visible from the module where the `done` statement is used.
* Only those done statements can have a template or a value redirect that refer to a specific PTC component reference. That is, it is not allowed to use this construct with `any component.done` or `all component.done`.

A value returning `done` statement is successful if all the conditions below are fulfilled:

* The corresponding PTC has terminated.
* The function that was started on the PTC has terminated normally. That is, the PTC was stopped neither by itself nor by another component, and no dynamic test case error occurred.
* The return type of the function that was started on the PTC is identical to the type of the template used in the `done` statement.
* The value returned by the function on the PTC matches the given template.

If the `done` operation was successful and the value redirect is present the value returned by the PTC (if there was a matching template), or the local verdict on the PTC (if there was no matching template) is stored in the given variable or variable field.

The returned value can be retrieved from `alive` PTCs, too. In this case the `done` operation always refers to the return value of the most recently started behavior function of the PTC. Starting a new function on the PTC automatically discards the return value of the previous function (i.e. it cannot be retrieved or matched after the start component operation anymore).
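A sketch with an `alive` PTC; `ptcBehavior2` is a hypothetical second behavior function, the other names follow the example below:

[source]
----
var MyCompType ptc := MyCompType.create alive;
ptc.start(ptcBehavior());
ptc.done(MyReturnType : ?) -> value myVar; // return value of ptcBehavior
ptc.start(ptcBehavior2());
// the return value of ptcBehavior can no longer be retrieved
----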

Example:

[source]
----
type integer MyReturnType with { extension "done" };

function ptcBehavior() runs on MyCompType return MyReturnType
{
  setverdict(inconc);
  return 123;
}

// value returning 'done'
testcase myTestCase() runs on AnotherCompType
{
  var MyReturnType myVar;
  var MyCompType ptc := MyCompType.create;
  ptc.start(ptcBehavior());
  ptc.done(MyReturnType : ?) -> value myVar;
  // myVar will contain 123
}

// verdict returning 'done'
testcase myTestCase2() runs on AnotherCompType
{
  var verdicttype myVar;
  var MyCompType ptc := MyCompType.create;
  ptc.start(ptcBehavior());
  ptc.done -> value myVar;
  // myVar will contain inconc
}
----

== Dynamic Templates

Dynamic templates (template variables, functions returning templates and passing template variables by reference) are now parts of the TTCN–3 Core Language standard (<<13-references.adoc#_1, [1]>>). These constructs have been added to the standard with the same syntax and semantics as they were supported in this Test Executor. Thus dynamic templates are not considered language extensions anymore.

However, there is one extension compared to the supported version of Core Language. Unlike the standard, the compiler and the run-time environment allow the external functions to return templates.

Example:

[source]
----
// this is not valid according to the standard
external function MyExtFunction() return template octetstring;
----

== Template Module Parameters

The compiler accepts template module parameters by inserting an optional `template` keyword into the standard `modulepar` syntax construct between the `modulepar` keyword and the type reference. The extended BNF rule:

[source,subs="+quotes"]
ModuleParDef ::= "modulepar" (ModulePar | ("{" MultiTypedModuleParList "}")) +
ModulePar ::= *["template"]* Type ModuleParList

Example:

[source]
----
modulepar template charstring mp_tstr1 := ( "a" .. "f") ifpresent
modulepar template integer mp_tint := complement (1,2,3)
----

== Predefined Functions

The built-in predefined functions `ispresent`, `ischosen`, `lengthof` and `sizeof` are applicable not only to value-like language elements (constants, variables, etc.), but also to template-like entities (templates, template variables, template parameters). If the function is allowed to be called on a value of a given type, it is also allowed to be called on a template of that type, with the meaning described in the following subchapters.

NOTE: "dynamic test case error" does not necessarily denote here an error situation: it may well be a regular outcome of the function.

=== `sizeof`

The function `sizeof` is applicable to templates of `record`, `set`, `record of`, `set of` and `objid` types. The function is applicable only if it gives the same result on all values that match the template.<<13-references.adoc#_12, [12]>> In case of `record of` and `set of` types the length restrictions are also considered. A dynamic test case error occurs if the template can match values with different sizes or the length restriction contradicts the number of elements in the template body.

Examples:

[source]
----
type record of integer R;
type set S { integer f1, bitstring f2 optional, charstring f3 optional }
template R tr_1 := { 1, permutation(2, 3), ? }
template R tr_2 := {1, *, (2, 3) }
template R tr_3 := { 1, *, 10 } length(5)
template R tr_4 := { 1, 2, 3, * } length(1..2)
template S tr_5 := { f1 := (0..99), f2 := omit, f3 := ? }
template S tr_6 := { f3 := *, f1 := 1, f2 := '00'B ifpresent }
template S tr_7 := ({ f1 := 1, f2 := omit, f3 := "ABC" },
                  { f1 := 2, f3 := omit, f2 := '1'B })
template S tr_8 := ?

//sizeof(tr_1) → 4
//sizeof(tr_2) → error
//sizeof(tr_3) → 5
//sizeof(tr_4) → error
//sizeof(tr_5) → 2
//sizeof(tr_6) → error
//sizeof(tr_7) → 2
//sizeof(tr_8) → error
----

=== `ispresent`

The predefined function `ispresent` has been extended; its parameter can now be any valid TemplateInstance. It is working according to the following ETSI CRs: http://forge.etsi.org/mantis/view.php?id=5934 and http://forge.etsi.org/mantis/view.php?id=5936.
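A sketch of calling `ispresent` on a template instance (the type and names are hypothetical):

[source]
----
type record MyRec3 {
  integer f1,
  integer f2 optional
}

var template MyRec3 vt_rec := { f1 := ?, f2 := omit }
// ispresent now accepts any TemplateInstance;
// ispresent(vt_rec.f2) yields false, because f2 is omit
----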

=== `oct2unichar`

The function `oct2unichar` (`in octetstring invalue`, `in charstring string_encoding := "UTF-8"`) `return universal charstring` converts an octetstring `invalue` to a universal charstring by use of the given `string_encoding`. The octets are interpreted as mandated by the standardized mapping associated with the given `string_encoding` and the resulting characters are appended to the returned value. If the optional `string_encoding` parameter is omitted, the default value "UTF-8" is used.

The following values are allowed as `string_encoding` actual parameters: `UTF-8`, `UTF-16`, `UTF-16BE`, `UTF-16LE`, `UTF-32`, `UTF-32BE`, `UTF-32LE`.

A dynamic test case error (DTE) occurs if `invalue` does not conform to the UTF standards. `oct2unichar` checks whether the Byte Order Mark (BOM) is present; if it is not, a warning is appended to the log file. `oct2unichar` decodes `invalue` even in the absence of the BOM.

Any code unit greater than 0x10FFFF is ill-formed.

UTF-32 code units in the range of 0x0000D800 – 0x0000DFFF are ill-formed.

UTF-16 code units in the range of 0xD800 – 0xDFFF are ill-formed.

UTF-8 code units in the range of 0xD800 – 0xDFFF are ill-formed.

Example:

[source]
----
oct2unichar('C384C396C39CC3A4C3B6C3BC'O) = "ÄÖÜäöü";
oct2unichar('00C400D600DC00E400F600FC'O, "UTF-16LE") = "ÄÖÜäöü";
----

=== `unichar2oct`

The function `unichar2oct` (`in universal charstring invalue, in charstring string_encoding := "UTF-8"`) `return octetstring` converts a universal charstring `invalue` to an octetstring. Each octet of the octetstring will contain the octets mandated by mapping the characters of `invalue` using the standardized mapping associated with the given `string_encoding`, in the same order as the characters appear in `invalue`. If the optional `string_encoding` parameter is omitted, the default encoding is "UTF-8".

The following values are allowed as `string_encoding` actual parameters: UTF-8, UTF-8 BOM, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE.

The function `unichar2oct` adds the Byte Order Mark (BOM) to the beginning of the `octetstring` in case of `UTF-16` and `UTF-32` encodings. The `remove_bom` function helps to remove it, if it is not needed. The presence of the BOM is expected at the inverse function `oct2unichar` because the coding type (without the BOM) can be detected only in case of `UTF-8` encoded `octetstring`. By default UTF-8 encoding does not add the BOM to the `octetstring`, however `UTF-8` `BOM` encoding can be used to add it.

DTE occurs if the `invalue` does not conform to UTF standards.

Any code unit greater than 0x10FFFF is ill-formed.

Example:

[source]
----
unichar2oct("ÄÖÜäöü") = 'C384C396C39CC3A4C3B6C3BC'O;
unichar2oct("ÄÖÜäöü","UTF-16LE") = 'FFFE00C400D600DC00E400F600FC'O;
----

[[get-stringencoding]]
=== `get_stringencoding`

The function `get_stringencoding (in octetstring encoded_value) return charstring` identifies the encoding of `encoded_value`. The following charstring values can be returned: ASCII, UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, UTF-32LE.

If the type of encoding cannot be identified, the value `<unknown>` is returned.

Example:

[source]
----
var octetstring invalue := 'EFBBBFC384C396C39CC3A4C3B6C3BC'O;
var charstring codingtype := get_stringencoding(invalue);
// the resulting codingtype is "UTF-8"
----

[[remove-bom]]
=== `remove_bom`

The function `remove_bom (in octetstring encoded_value) return octetstring` strips the BOM from `encoded_value` if it is present; otherwise it returns the octetstring unchanged.

Example:

[source]
----
var octetstring invalue := 'EFBBBFC384C396C39CC3A4C3B6C3BC'O;
var octetstring nobom := remove_bom(invalue);
// the resulting nobom contains: 'C384C396C39CC3A4C3B6C3BC'O
----

== Additional Predefined Functions

In addition to standardized TTCN–3 predefined functions given in Annex C of <<13-references.adoc#_1, [1]>> and Annex B of <<13-references.adoc#_3, [3]>> the following built-in conversion functions are supported by our compiler and run-time environment:

=== `str2bit`

The function `str2bit (charstring value) return bitstring` converts a `charstring` value to a `bitstring`, where each character represents the value of one bit in the resulting bitstring. Its argument may contain the characters "0" or "1" only, otherwise the result is a dynamic test case error.

NOTE: This function is the reverse of the standardized `bit2str`.

Example:

[source]
----
str2bit ("1011011100") = '1011011100'B
----

=== `str2hex`

The function `str2hex (charstring value)` `return hexstring` converts a `charstring` value to a `hexstring`, where each character in the character string represents the value of one hexadecimal digit in the resulting `hexstring`. The incoming character string may contain any number of characters. A dynamic test case error occurs if one or more characters of the charstring are outside the ranges "0" .. "9", "A" .. "F" and "a" .. "f".

NOTE: This function is the reverse of the standardized `hex2str`.

Example:

[source]
----
str2hex ("1D7") = '1D7'H
----

=== `float2str`

The function `float2str (float value) return charstring` converts a `float` value to a `charstring`. If the input is zero or its absolute value is between 10^-4^ and 10^10^, the decimal dot notation is used in the output with 6 digits in the fraction part. Otherwise the exponential notation is used with automatic (at most 6) digits precision in the mantissa.

Example:

[source]
----
float2str (3.14) = "3.140000"
----

=== `unichar2char`

The function `unichar2char (universal charstring value) return charstring` converts a `universal charstring` value to a `charstring`. The elements of the input string are converted one by one. The function only converts universal characters when the conversion result lies between 0 and 127 (that is, the result is an ISO 646 character).

NOTE: The inverse conversion is implicit, that is, the `charstring` values are converted to `universal charstring` values automatically, without the need for a conversion function.

Example:

[source]
----
unichar2char(char(0,0,0,64)) = "@"
----

=== `log2str`

The function `log2str` can be used to log into `charstring` instead of the log file.

Syntax:

[source]
log2str (…) return charstring

This function can be parameterized in the same way as the `log` function. It returns a charstring value which contains the log string for all the provided parameters, but without the timestamp, severity and call stack information; thus the output does not depend on the runtime configuration file. The parameters are interpreted the same way as in the `log` statement: their string values are identical to what the `log` statement writes to the log file. The extra information (timestamp, severity, call stack) not included in the output can be obtained by writing external functions that use the runtime's Logger class to obtain the required data.
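
For instance (the variable names below are illustrative):

[source]
----
var integer v_count := 3;
var charstring v_msg := log2str("Received ", v_count, " messages");
// v_msg now contains "Received 3 messages"
----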

=== `testcasename`

The function `testcasename` returns the unqualified name of the test case that is currently being executed. When it is called from the control part while no test case is being executed, it returns the empty string.

Syntax:

[source]
testcasename () return charstring
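
A short sketch of its use; the component and test case names are illustrative:

[source]
----
type component MyComp_CT {}

testcase tc_sample() runs on MyComp_CT
{
  log(testcasename()); // logs "tc_sample"
}
----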

=== `isbound`

The function `isbound` behaves identically to the `isvalue` function with the following exception: it returns true for a record-of value which contains both initialized and uninitialized elements.

[source]
----
type record of integer rint;
var rint r_u; // uninitialized
isvalue(r_u); // returns false
isbound(r_u); // returns false also
//lengthof(r_u) would cause a dynamic testcase error

var rint r_0 := {} // zero length
isvalue(r_0); // returns true
isbound(r_0); // returns true
lengthof(r_0); // returns 0

var rint r_3 := { 0, -, 2 } // has a "hole"
isvalue(r_3); // returns false
isbound(r_3); // returns true
lengthof(r_3); // returns 3

var rint r_3full := { 0, 1, 2 }
isvalue(r_3full); // returns true
isbound(r_3full); // returns true
lengthof(r_3full); // returns 3
----

The introduction of `isbound` permits TTCN-3 code to distinguish between r_u and r_3; `isvalue` alone cannot do this (it returns false for both).

Syntax:
[source]
isbound (in template any_type i) return boolean;

=== `ttcn2string`

Syntax:
[source]
ttcn2string(in <TemplateInstance> ti) return charstring

This predefined function returns its parameter's value in a string which is in TTCN-3 syntax. The returned string is legal TTCN-3 with a few exceptions, such as unbound values. Unbound values are returned as "-", which can be used only in fields of assignment or value list notations, but not as top level assignments (e.g. `x := -` is illegal). Differences between the output format of `ttcn2string()` and `log2str()`:

[cols=",,",options="header",]
|===
|Value/template |`log2str()` |`ttcn2string()`
|Unbound value |`"<unbound>"` |`"-"`
|Uninitialized template |`"<uninitialized template>"` |`"-"`
|Enumerated value |`name (number)` |`name`
|===
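
For example (variable names are illustrative; the exact spacing of the returned string may differ):

[source]
----
var integer v_num := 42;
var charstring v_str := ttcn2string(v_num); // "42"

var template integer vt_uninit;
// ttcn2string(vt_uninit) returns "-"
// log2str(vt_uninit) returns "<uninitialized template>"
----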

=== `string2ttcn`

Syntax:

[source]
string2ttcn(in charstring ttcn_str, inout <reference> ref)

This predefined function does not have a return value, thus it is a statement. Any error in the input string causes an exception that can be caught using @try - @catch blocks; the message string of the exception contains the exact cause of the error, which may be a syntax or a semantic error. This function uses the module parameter parser of the TITAN runtime, so it accepts the same syntax as the module parameters of the configuration file. There are differences between the TTCN-3 syntax and the configuration file module parameter syntax; these are described in the module parameters chapter of the documentation. The second parameter must be a reference to a value or template variable.

Example code:

[source]
----
type record MyRecord { integer a, boolean b }

var template MyRecord my_rec
@try {
  string2ttcn("complement ({1,?},{(1,2,3),false}) ifpresent", my_rec)
  log(my_rec)
}
@catch (err_str) {
  log("string2ttcn() failed: ", err_str)
}
----

The log output will look like this:

[source]
----
complement ({ a := 1, b := ? }, { a := (1, 2, 3), b := false }) ifpresent
----

[[encode-base64]]
=== `encode_base64`

Syntax:

[source]
----
encode_base64(in octetstring ostr, in boolean
  use_linebreaks := false) return charstring
----

The function `encode_base64 (in octetstring ostr, in boolean use_linebreaks := false) return charstring` converts an octetstring `ostr` to a charstring. The charstring will contain the Base64 representation of `ostr`. If the `use_linebreaks` parameter is true, newlines are added after every 76 output characters, according to the MIME specifications; if it is omitted, the default value is false.

Example:

[source]
----
encode_base64('42617365363420656E636F64696E6720736368656D65'O) ==
"QmFzZTY0IGVuY29kaW5nIHNjaGVtZQ=="
----

[[decode-base64]]
=== `decode_base64`

Syntax:

[source]
----
decode_base64(in charstring str) return octetstring
----

The function `decode_base64 (in charstring str) return octetstring` converts a charstring `str` encoded in Base64 to an octetstring. The octetstring will contain the decoded Base64 string of `str`.

Example:

[source]
----
decode_base64("QmFzZTY0IGVuY29kaW5nIHNjaGVtZQ==") ==
'42617365363420656E636F64696E6720736368656D65'O
----

=== `json2cbor`

Syntax:

[source]
----
json2cbor(in universal charstring us) return octetstring
----

The function `json2cbor(in universal charstring us) return octetstring` converts a JSON document (as produced by TITAN's JSON encoder) into the binary representation of that document using a binary coding called CBOR. The encoding follows the recommendations written in the CBOR standard <<13-references.adoc#_22, [22]>> section 4.2.

Example:

[source]
----
json2cbor("{\"a\":1,\"b\":2}") == 'A2616101616202'O
----

=== `cbor2json`

Syntax:
[source]
----
cbor2json(in octetstring os) return universal charstring
----

The function `cbor2json(in octetstring os) return universal charstring` converts a CBOR encoded bytestream into a JSON document which can be decoded using the built-in JSON decoder. The decoding follows the recommendations written in the CBOR standard <<13-references.adoc#_22, [22]>> section 4.1, except that the indefinite-length items are not made definite before conversion and the decoding of indefinite-length items is not supported.

Example:
[source]
----
cbor2json('A2616101616202'O) == "{\"a\":1,\"b\":2}"
----

=== `json2bson`

Syntax:
[source]
----
json2bson(in universal charstring us) return octetstring
----

The function `json2bson(in universal charstring us) return octetstring` converts a JSON document (as produced by TITAN's JSON encoder) into the binary representation of that document using a binary coding called BSON. Only top level JSON objects and arrays can be encoded. (Note that an encoded top level JSON array will be decoded as a JSON object.) The encoding follows the rules written in the BSON standard <<13-references.adoc#_23, [23]>> and handles the extension rules written in the MongoDB Extended JSON document <<13-references.adoc#_24, [24]>>. The encoding of 128-bit float values is not supported.

Example:
[source]
----
json2bson("{\"a\":1,\"b\":2}") == '13000000106100010000001062000200000000'O
----

=== `bson2json`

Syntax:
[source]
----
bson2json(in octetstring os) return universal charstring
----

The function `bson2json(in octetstring os) return universal charstring` converts a BSON encoded bytestream into a JSON document which can be decoded using the built-in JSON decoder. The decoding follows the rules written in the BSON standard <<13-references.adoc#_23, [23]>> and handles the extension rules written in the MongoDB Extended JSON document <<13-references.adoc#_24, [24]>>. The decoding of 128-bit float values is not supported.

Example:
[source]
----
bson2json('13000000106100010000001062000200000000'O) == "{\"a\":1,\"b\":2}"
----

== Exclusive Boundaries in Range Subtypes

The boundary values used to specify range subtypes can be preceded by an exclamation mark. By using the exclamation mark the boundary value itself can be excluded from the specified range. For example integer range (!0..!10) is equivalent to range (1..9). In case of float type open intervals can be specified by using excluded boundaries, for example (0.0..!1.0) is an interval which contains 0.0 but does not contain 1.0.
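
The ranges mentioned above can be written as subtype definitions as follows (the type names are illustrative):

[source]
----
type integer SingleDigit (!0 .. !10);          // equivalent to (1 .. 9)
type float UnitIntervalHalfOpen (0.0 .. !1.0); // contains 0.0 but not 1.0
----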

[[special-float-values-infinity-and-not-a-number]]
== Special Float Values Infinity and not_a_number

The keyword infinity (which is also used to specify value range and size limits) can be used to specify the special float values -infinity and +infinity; these are equivalent to MINUS-INFINITY and PLUS-INFINITY used in ASN.1. The keyword not_a_number has been introduced, which is equivalent to NOT-A-NUMBER used in ASN.1. The -infinity, +infinity and not_a_number special values can be used in arithmetic operations. If an operand of an arithmetic operation is not_a_number, the result of the operation will also be not_a_number. The special value not_a_number cannot be used in a float range subtype because it is an unordered value, but it can be added as a single value; for example, subtype (0.0 .. infinity, not_a_number) contains all non-negative float values and the not_a_number value.
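
A brief sketch (the type and constant names are illustrative):

[source]
----
type float NonNegativeOrNaN (0.0 .. infinity, not_a_number);

const float c_posInf := infinity;
const float c_negInf := -infinity;
const float c_nan := not_a_number;
// an arithmetic operation involving c_nan, e.g. c_nan + 1.0,
// yields not_a_number
----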

[[ttcn-3-preprocessing]]
== TTCN–3 Preprocessing

Preprocessing of the TTCN-3 files with a C preprocessor is supported by the compiler. External preprocessing is used: the Makefile Generator generates a `Makefile` which will invoke the C preprocessor to preprocess the TTCN-3 files with the suffix `.ttcnpp`. The output of the C preprocessor will be generated to an intermediate file with the suffix `.ttcn`. The intermediate files contain the TTCN-3 source code and line markers. The compiler can process these line markers along with TTCN-3. If the preprocessing is done with the `-P` option <<13-references.adoc#_13, [13]>>, the resulting code will not contain line markers; it will be compatible with any standard TTCN-3 compiler. The compiler will use the line markers to give almost <<13-references.adoc#_14, [14]>> correct error or warning messages, which will point to the original locations in the `.ttcnpp` file. The C preprocessor directive `#include` can be used in `.ttcnpp` files; the Makefile Generator will treat all files with suffix `.ttcnin` as TTCN-3 include files. The `.ttcnin` files will be added to the Makefile as special TTCN-3 include files which will not be translated by the compiler, but will be checked for modification when building the test suite.

Extract from the file:
[source]
----
Example.ttcnpp:
module Example {
function func()
{
#ifdef DEBUG
log("Example: DEBUG");
#else
log("Example: RELEASE");
#endif

}


----

The output is a preprocessed intermediate file `Example.ttcn`. The resulting output from the above code:
[source]
----

# 1 "Example.ttcnpp"
module Example {
function func()
{
log("Example: RELEASE");
}
----

The line marker (`# 1 "Example.ttcnpp"`) tells the compiler what the origin of the succeeding code is.

== Parameter List Extensions

In addition to standardized TTCN-3 parameter handling described in 5.4.2 of <<13-references.adoc#_1, [1]>> TITAN also supports the mixing of list notation and assignment notation in an actual parameter list.

=== Missing Named and Unnamed Actual Parameters

To facilitate handling of long actual parameter lists in the TITAN implementation, the actual parameter list consists of two optional parts: an unnamed part followed by a named part, in this order. In the actual parameter list a value must be assigned to every mandatory formal parameter either in the named part or in the unnamed part. (Mandatory parameter is one without default value assigned in the formal parameter list.) Consequently, the unnamed part, the named part or both may be omitted from the actual parameter list. Omitting the named part from the actual parameter lists provides backward compatibility with the standard notation.

The named and unnamed parts are separated by a comma as are the elements within both lists. It is not allowed to assign value to a given formal parameter in both the named and the unnamed part of the actual parameter list.

There can be at most one unnamed part, followed by at most one named part. Consequently, an unnamed actual parameter may not follow a named parameter.

Named actual parameters must follow the same relative order as the formal parameters. It is not allowed to specify named actual parameters in an arbitrary order.

Examples

The resulting parameter values are indicated in brackets in the comments:

[source]
----
function myFunction(integer p_par1, boolean p_par2 := true) { … }
control {
// the actual parameter list is syntactically correct below:
myFunction(1, p_par2 := false); // (1, false)
myFunction(2); // (2, true)
myFunction(p_par1 := 3, p_par2 := false); // (3, false)
// the actual parameter list is syntactically erroneous below:
myFunction(0, true, -); // too many parameters
myFunction(1, p_par1 := 1); // p_par1 is given twice
myFunction(); // no value is assigned to mandatory p_par1
myFunction(p_par2 := false, p_par1 := 3); // out of order
myFunction(p_par2 := false, 1); // unnamed part cannot follow
// named part
}
----

== `function`, `altstep` and `testcase` References

TITAN supports the behaviour type package (<<13-references.adoc#_5, [5]>>) of the TTCN-3 standard; however, this feature was included in the standard with a different syntax.

It is allowed to create TTCN–3 types of `functions`, `altsteps` and `testcases`. Values, for example variables, of such types can carry references to the respective TTCN–3 definitions. To facilitate the use of these references, three new operations (`refers`, `derefers` and `apply`) were introduced. This language feature makes it possible to create generic algorithms in TTCN–3 with late binding, i.e. code in which the function to be executed is specified only at runtime.
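
A minimal sketch of late binding with these operations (all names are illustrative):

[source]
----
type function IntOp_FT (in integer p_x) return integer;

function f_double(in integer p_x) return integer
{
  return 2 * p_x;
}

function f_applyTwice(in IntOp_FT p_op, in integer p_x) return integer
{
  // the actual function to execute is bound only at runtime
  return p_op.apply(p_op.apply(p_x));
}

// f_applyTwice(refers(f_double), 3) yields 12
----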

[[function-types-with-a-runson-self-clause]]
== Function Types with a RunsOn_self Clause

A function type or an altstep type, defined with a standard `runs on` clause, can use all constants, variables, timers and ports given in the component type definition referenced by the `runs on` clause (see chapter 16 of <<13-references.adoc#_1, [1]>>).

A function type or an altstep type, defined with the TITAN-introduced `runs on self` clause, similarly, makes use of the resources of a component type; however, the component type in question is not given in advance. When an altstep or a function is called via a function variable, that is, a reference, using the `apply` operation, it can use the resources defined by the component type indicated in the `runs on` clause of the actually referenced function or altstep.

The "runs on self" construct is permitted only for `function` and `altstep` types. Any actual function or altstep must refer to a given component type name in their `runs on` clause.

A variable of a function type is called a *function variable*. Such variables can contain references to functions or altsteps. At function variable assignment, component type compatibility checking is performed with respect to the component context of the assignment statement and the `runs on` clause of the assigned function or altstep. When the `apply()` operator is applied to a function variable, no compatibility checking is performed.

The rationale for this distinction is the following: due to type compatibility checking at the time of value assignment to the function variable, the TTCN-3 environment can be sure that any non-`null` value of the variable is a function reference that is component-type-compatible with that component that is actually executing the code using the `apply()` operator.

As a consequence of this, it is forbidden to use values of function variables as arguments to the TTCN-3 operators `start()` or `send()`.

Example of using the clause `runs on self` in a library

A component type may be defined as an extension of another component type (using the standard `extends` keyword mentioned in chapter 6.2.10.2 of <<13-references.adoc#_1, [1]>>). The effect of this definition is that the extended component type will implicitly contain all constant, variable, port and timer definitions from the parent type as well. In the example below, the component type `User_CT` aggregates its own constant, variable, port and timer definitions (resources) with those defined in the component type `Library_CT` (see line a).

The library developer writes a set of library functions that have a `runs on Library_CT` clause (see line h). Such library functions may offer optional references to other functions that are supposed to be specified by the user of the library (see line e). We say in this case that the library function may call user-provided *callback functions* via function variables. These function variables must have a type specified; optionally with a runs on clause. If this `runs on` clause refers to an actual component type name, then this actual type name must be known at the time of writing the library.

Library functions that runs on `Library_CT` can run on other component types as well, provided that the actual component type is compatible with `Library_CT` (see chapter 6.3.3 of <<13-references.adoc#_1, [1]>>). An obvious design goal for the library writer is to permit references to any callback function that has a component-type-compatible `runs on` clause. However, the cardinality of compatible component types is infinitely large; therefore, they *cannot* be explicitly referenced by the function type definitions of the library.

The "runs on self" concept provides a remedy for this contradiction and allows conceiving library components prepared to take up user-written "plug-ins".

In the code excerpt below, function `f_LibraryFunction` (which has the clause `runs on Library_CT`) uses the function reference variable `v_CallbackRef_self` (defined in `Library_CT`). The function `f_MyCallbackFunction` (see line b) has a `runs on User_CT` clause. `User_CT` (see line a) extends `Library_CT`, therefore it is suitable for running a library function with a `runs on Library_CT` clause, for example.

When the assignment to the function variable `v_CallbackRef_self` is performed (see line c) inside `f_MyUserFunction` (that is, inside the context `User_CT`), then compatibility checking is performed. Since `User_CT` is compatible with `Library_CT`, the assignment is allowed.

A direct call to `f_MyCallbackFunction()` (with `runs on User_CT`) from a `runs on Library_CT` context (see line g) would cause a semantic error according to the TTCN-3 language. However, calling the function via `v_CallbackRef_self` is allowed (see line d).

[source]
----
module RunsOn_Self
{
//=========================================================================
// Function Types
//=========================================================================

//---- line f)
type function CallbackFunctionRefRunsonSelf_FT () runs on self;

//=========================================================================
//Component Types
//=========================================================================
type component Library_CT
{
//---- line e)
  var CallbackFunctionRefRunsonSelf_FT v_CallbackRef_self := null;
  var integer v_Lib;
}
//---- line a)
type component User_CT extends Library_CT
{
  var integer v_User;
}

//---- line h)
function f_LibraryFunction () runs on Library_CT
{
//---- line g)
  // Direct call of the callback function would cause semantic ERROR
//f_MyCallbackFunction();

  if (v_CallbackRef_self != null)
  {
    // Calling a function via reference that has a "runs on self" in its header
    // is always allowed with the exception of functions/altsteps without runs
    // on clause
//---- line d)
    v_CallbackRef_self.apply();
  }
}// end f_LibraryFunction

function f_MyUserFunction () runs on User_CT
{
  // This is allowed as f_MyCallbackFunction has runs on clause compatible
  // with the runs on clause of this function (f_MyUserFunction)
  // The use of function/altstep references with "runs on self" in their
  // headers is limited to call them on the given component instance; i.e.
  // allowed: assignments, parameterization and activate (the actual function's
  //          runs on is compared to the runs on of the function in which
  //          the operation is executed)
  // not allowed: start, sending and receiving
  // no check is needed for apply!
//---- line c)
  v_CallbackRef_self := refers (f_MyCallbackFunction);

  // This is allowed as Library_CT is a parent of User_CT
  // Pls. note, as the function is executing on a User_CT
  // instance, it shall never cause a problem of calling
  // a callback function with "runs on User_CT" from it.
  f_LibraryFunction();

}//end f_MyUserFunction

//---- line b)
function f_MyCallbackFunction () runs on User_CT
{/*application/dependent behaviour*/}

} // end of module RunsOn_Self
----

[[ttcn3-macros]]
== TTCN–3 Macros

The compiler and the run-time environment support the following non-standard macro notation in TTCN–3 modules. All TTCN–3 macros consist of a percent (%) character followed by the macro identifier. Macro identifiers are case sensitive. The table below summarizes the available macros and their meaning. Macro identifiers not listed here are reserved for future extension.

.TTCN-3 macros
[cols=",",options="header",]
|===
|Macro |Meaning
|`%moduleId` |name of the TTCN–3 module
|`%definitionId` |name of the top-level TTCN–3 definition
|`%testcaseId` |name of the test case that is currently being executed
|`%fileName` |name of the TTCN–3 source file
|`%lineNumber` |number of line in the source file
|===

The following rules apply to macros:

* All macros are substituted with a value of type `charstring`. They can be used as operands of complex expressions (concatenation, comparison, etc.).
* All macros except `%testcaseId` are evaluated during compilation and they can be used anywhere in the TTCN–3 module.
* Macro `%testcaseId` is evaluated at runtime. It can be used only within functions and altsteps that are being run on test components (on the MTC or PTCs) and within testcases. It is not allowed to use macro `%testcaseId` in the module control part. If a function or altstep that contains macro `%testcaseId` is called directly from the control part the evaluation of the macro results in a dynamic test case error.
* The result of macro `%testcaseId` is not a constant thus it cannot be used in the value of TTCN–3 constants. It is allowed only in those contexts where TTCN–3 variable references are permitted.
* Macro `%definitionId` is always substituted with the name of the top-level module definition that it is used in. <<13-references.adoc#_15, [15]>> For instance, if the macro appears in a constant that is defined within a function then the macro will be substituted with the function’s name rather than the one of the constant. When used within the control part macro `%definitionId` is substituted with the word "`control`".
* Macro `%fileName` is substituted with the name of the source file in the same form as it was passed to the compiler. This can be a simple file name, a relative or an absolute path name.
* The result of macro `%lineNumber` is always a string that contains the current line number as a decimal number. Numbering of lines starts from 1. All lines of the input file (including comments and empty lines) are counted. When it needs to be used in an integer expression a conversion is necessary: `str2int(%lineNumber)`. The above expression is evaluated during compilation without any runtime performance penalty.
* Source line markers are considered when evaluating macros `%fileName` and `%lineNumber`. In preprocessed TTCN–3 modules the macros are substituted with the original file name and line number that the macro comes from provided that the preprocessor supports it.
* When macros are used in `log()` statements, they are treated like literal strings rather than charstring value references. That is, quotation marks around the strings are not used and special characters within them are not escaped in the log file.
* For compatibility with the C preprocessor the compiler also recognizes the following C style macros: `+__FILE__+` is identical to `%fileName` and `+__LINE__+` is identical to `str2int(%lineNumber)`.
* Macros are not substituted within quotation marks (i.e. within string literals and attributes).
* The full power of TTCN–3 macros can be exploited in combination with the C preprocessor.

Example:
[source]
----
module M {
// the value of c_MyConst will be "M"
const charstring c_MyConst := %moduleId;
// t_MyTemplateWithVeryLongName will contain 28
template integer t_MyTemplateWithVeryLongName := lengthof(%definitionId);
function f_MyFunction() {
// the value of c_MyLocalConst1 will be "f_MyFunction"
const charstring c_MyLocalConst1 := %definitionId;
// the value of c_MyLocalConst2 will be "%definitionId"
const charstring c_MyLocalConst2 := "%definitionId";
// the value of c_MyLocalConst3 will be "12"
const charstring c_MyLocalConst3 := %lineNumber; //This is line 12
// the value of c_MyLocalConst4 will be 14
const integer c_MyLocalConst4 := str2int(%lineNumber);//This is line 14
// the line below is invalid because %testcaseId is not a constant
const charstring c_MyInvalidConst := %testcaseId;
// this is valid, of course
var charstring v_MyLocalVar := %testcaseId;
// the two log commands below give different output in the log file
log("function:", %definitionId, " testcase: ", %testcaseId);
// printout: function: f_MyFunction testcase: tc_MyTestcase
log("function:", c_MyLocalConst1, " testcase: ", v_MyLocalVar);
// printout: function: "f_MyFunction" testcase: "tc_MyTestcase"
}
}
----

== Component Type Compatibility

The ETSI standard defines type compatibility of component types for component reference values and for functions with "`runs on`" clause. In order to be compatible, both component types are required to have identical definitions (cf. <<13-references.adoc#_1, [1]>>, chapter 6.3.3).

NOTE: Compatibility is an asymmetric relation, if component type B is compatible with component type A, the opposite is not necessarily true. (E.g., component type B may contain definitions absent in component type A.)

All definitions from the parent type are implicitly contained when the keyword `extends` appears in the type definition (cf. <<13-references.adoc#_1, [1]>>, chapter 6.2.10.2) and so the required identity of the type definitions is ensured. The compiler considers component type B to be compatible with A if B has an `extends` clause, which contains A or a component type that is compatible with A.

Example:
[source]
----
type component A { var integer i; }
type component B extends A {
// extra definitions may be added here
}
----

In order to provide support for existing TTCN–3 code (e.g. standardized test suites) it is allowed to explicitly signal the compatibility relation between component types using a special `extension` attribute. Using such attributes shall be avoided in newly written TTCN–3 modules. Combining component type inheritance and the attribute `extension` is possible, but not recommended.

Thus, the compiler considers component type B to be compatible with A if B has an `extension` attribute that points to A as base component type and all definitions of A are present and identical in B.

[source]
----
type component A { var integer i; }
type component B {
var integer i; // definitions of A must be repeated
var octetstring o; // new definitions may be added
} with {
extension "extends A"
}
----

=== Implementation Restrictions

The lists of definitions shared with different compatible component types shall be distinct. If component type Z is compatible with both X and Y, and neither X is compatible with Y nor Y with X, then X and Y shall not have definitions with identical names but different origin. If both X and Y are compatible with component type C, then all definitions in X and Y that originate from C are inherited by Z on two paths.

Example: According to the standard, component type Z should be compatible with both X and Y, but the compatibility relation cannot be established because X and Y have a definition with the same name.

[source]
----
type component X { timer T1, T2; }
type component Y { timer T1, T3; }
type component Z { timer T1, T2, T3; }
with { extension "extends X, Y" }
// invalid because the origin of T1 is ambiguous
----

The situation can be resolved by introducing common ancestor C for X and Y, which holds the shared definition.

[source]
----
type component C { timer T1; }
type component X { timer T1, T2; } with { extension "extends C" }
type component Y { timer T1, T3; } with { extension "extends C" }
type component Z {
timer T1, // origin is C
T2, // origin is X
T3; // origin is Y
} with { extension "extends X, Y" }
----

Circular compatibility chains between component types are not allowed. If two component types need to be defined as identical, type aliasing must be used instead of compatibility.

The following code is invalid:

[source]
----
type component A {

// the same definitions as in B
} with { extension "extends B" }
type component B {

// the same definitions as in A
} with { extension "extends A" }
----

When using the non-standard extension attribute the initial values of the corresponding definitions of compatible components should be identical. The compiler does not enforce this for all cases; however, in the case of different initial values the resulting run-time behavior is undefined. If the initial values cannot be determined at compile time (module parameters) the compiler will remain silent. In other situations the compiler may report an error or a warning.

All component types are compatible with every empty component type. Empty components are components that have neither their own nor inherited definitions.

== Implicit Message Encoding

The TTCN–3 standard <<13-references.adoc#_1, [1]>> does not specify a standard way of data encoding/decoding. TITAN has a common {cpp} API for encoding/decoding; to use this API external functions are usually needed. The common solution is to define a TTCN–3 external function and write the {cpp} code containing the API calls. In most cases the {cpp} code explicitly written to an auxiliary {cpp} file contains only simple code patterns which call the encoding/decoding API functions on the specified data. In TITAN there is a TTCN–3 language extension which automatically generates such external functions.

Based on this automatic encoding/decoding mechanism, dual-faced ports are introduced. Dual-faced ports have an external and an internal interface and can automatically transform messages passed through them based on mapping rules defined in TTCN–3 source files. These dual-faced ports eliminate the need for simple port mapping components and thus simplify the test configuration.

[[dual-faced-ports]]
=== Dual-faced Ports

In the TTCN–3 standard (<<13-references.adoc#_1, [1]>>), a port type is defined by listing the allowed incoming and outgoing message types. Dual-faced ports, on the other hand, have two different message lists: one for the external and one for the internal interface. The external and internal interfaces are given in two distinct port type definitions. The dual-faced concept is applicable to message based ports and to the message based part of mixed ports.

Dual-faced port types must have a `user` attribute to designate their external interface. The internal interface is given by the port type itself. A port type can serve as the external interface of several different dual-faced port types.

The internal interface is involved in communication operations (`send`, `receive`, etc.) and the external interface is used when transferring messages to/from other test components or the system under test. The operations `connect` and `map` applied on dual-faced ports will consider the external port type when checking the consistency of the connection or mapping. <<13-references.adoc#_16, [16]>>

==== Dual-faced Ports between Test Components

Dual-faced ports used for internal communication must have the attribute `internal` in addition to `user` (see section <<visibility-modifiers, Visibility Modifiers>>). The referenced port type describing the external interface may have any attributes.

==== Dual-faced Ports between Test Components and the SUT

The port type used as external interface must have the attribute `provider`. These dual-faced port types do not have their own test port; instead, they use the test port belonging to the external interface when communicating to SUT. Using the attribute `provider` implies changes in the Test Port API of the external interface. For details see the section "Provider port types" in <<13-references.adoc#_16, [16]>>.

If there are several entities within the SUT to be addressed, the dual-faced port type must have the attribute `address` in addition to `user`. In this case the external interface must have the attribute `address` too. For more details see section <<visibility-modifiers, Visibility Modifiers>>.

=== Type Mapping

Mapping is required between the internal and external interfaces of the dual-faced ports because the two faces are specified in different port type definitions, thus, enabling different sets of messages.

Messages passing through dual-faced ports will be transformed based on the mapping rules. Mapping rules must be specified for the outgoing and incoming directions separately. These rules are defined in the attribute `user` of the dual-faced port type.

An outgoing mapping is applied when a `send` operation is performed on the dual-faced port. The outcome of the mapping will be transmitted to the destination test component or SUT. The outgoing mappings transform each outgoing message of the internal interface to the outgoing messages of the external interface.

An incoming mapping is applied when a message arrives on a dual-faced port from a test component or the SUT. The outcome of the mapping will be inserted into the port queue and extracted by the `receive` operation. The incoming mappings transform each incoming message of the external interface to the incoming messages of the internal interface.

==== Mapping Rules

A mapping rule is an elementary transformation step applied on a message type (source type) resulting in another message type (target type). Source type and target type are not necessarily different.

Mapping rules are applied locally in both directions, thus, an error caused by a mapping rule affects the test component owning the dual-faced port, not its communication partner.

Mappings are given for each source type separately. Several mapping targets may belong to the same source type; if this is the case, all targets must be listed immediately after each other (without repeating the source type).

The following transformation rules may apply to the automatic conversion between the messages of the external and internal interfaces of a dual-faced port:

* No conversion. Applicable to any message type, this is a type preserving mapping; no value conversion is performed. Source and target types must be identical. This mapping does not have any options. For example, control or status indication messages may be conveyed transparently between the external and the internal interfaces. Keyword used in attribute `user` of port type definition: `simple`.
* Message discarding. This rule means that messages of the given source type will not be forwarded to the opposite interface. Thus, there is no destination type, which must be indicated by the not used symbol (-). This mapping does not have any options. For example, incoming status indication messages of the external interface may be omitted on the internal interface. Keyword used in attribute `user` of port type definition: `discard`.
* Conversion using the built-in codecs. Here, a corresponding encoding or decoding subroutine of the built-in codecs (for example RAW, TEXT or BER) is invoked. The conversion and error handling options are specified with the same syntax as used for the encoding/decoding functions, see section <<attribute-syntax, Attribute Syntax>>. Here, source type corresponds to input type and target type corresponds to output type of the encoding. Keyword used in attribute `user` of port type definition: `encode` or `decode`; either followed by an optional `errorbehavior`.
* Function or external function. The transformation rule may be described by an (external) function referenced by the mapping. The function must have the attribute `extension` specifying one of the prototypes given in section <<encoder-decoder-function-prototypes, Encoder/decoder Function Prototypes>>. The incoming and the outgoing type of the function must be equal to the source and target type of the mapping, respectively. The function may be written in TTCN-3, {cpp} or generated automatically by the compiler. This mapping does not have any options. Keyword used in attribute `user` of port type definition: `function`.
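As an illustration of how these rules fit together, the sketch below defines a hypothetical dual-faced port type (all type and function names are illustrative, not taken from any real test suite). Its `user` attribute refers to the external interface and lists one mapping per direction plus a discard rule:

[source]
----
// external interface: raw octet strings towards the SUT
type port PT_External message {
  out octetstring;
  in octetstring, StatusInd;
} with { extension "provider" }

// internal interface: structured messages towards the test component
type port PT_Internal message {
  out MyPDU;
  in MyPDU;
} with {
  extension "user PT_External
    out(MyPDU -> octetstring: function(f_enc_MyPDU))
    in(octetstring -> MyPDU: function(f_dec_MyPDU);
       StatusInd -> - : discard)"
}
----

Here `f_enc_MyPDU` and `f_dec_MyPDU` are assumed to be (external) functions with suitable prototypes; status indications arriving on the external interface are silently dropped.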

==== Mapping with One Target

Generally speaking, a source type may have one or more targets. Every mapping target can be used alone. However, only one target can be designated if

* no conversion takes place (keyword `simple`);
* a structured message is encoded (keyword `encode`) <<13-references.adoc#_17, [17]>>;
* an (external) function with prototype `convert` or `fast` is invoked

==== Mapping with More Targets

On the other hand, more than one target is needed when the type of an encoded message must be reconstructed. An octetstring, for example, can be decoded to a value of more than one structured PDU type. It is not necessary to specify mutually exclusive decoder rules. It is possible and useful to define a catch-all rule at the end to handle invalid messages.

The following rules may be used with more than one target if

* an (external) function with prototype `backtrack` is invoked;
* decoding a structured message (keyword `decode`);
* (as a last alternative) the source message is discarded (keyword `discard`)

The conversion rules are tried in the same order as given in the attribute until one of them succeeds, that is, until the function returns `0` (`OK`) or decoding is completed without any error. The outcome of the successful conversion will be the mapped result of the source message. If all conversion rules fail and the last alternative is `discard`, the source message is discarded. Otherwise a dynamic test case error occurs.
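A decoding chain with a catch-all rule might look like the following sketch (a fragment of a `user` attribute; the PDU type names are illustrative). The decoders are tried top-down, and messages matching neither type are dropped:

[source]
----
in(octetstring -> PDU_TypeA: decode(BER);
                  PDU_TypeB: decode(BER);
                  - : discard)
----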

==== Mapping from Sliding Buffer

Using sliding buffers is necessary, for example, if a stream-based transport, like TCP, is carrying the messages. A stream-based transport destroys message boundaries: a message may be torn apart or subsequent messages may stick together.

The following rules may be used with more than one target when there is a sliding buffer on the source side if

* an (external) function with prototype `sliding` is invoked;
* decoding a structured message (keyword `decode`)

The above rules imply that the source type of this mapping is either `octetstring` or `charstring`. The run-time environment maintains a separate buffer for each connection of the dual-faced port. Whenever data arrives in the buffer, the conversion rules are applied to the buffer in the same order as given in the attribute. If one of the rules succeeds (that is, the function returns `0` or decoding is completed without any error), the outcome of the conversion will appear on the destination side. If the buffer still contains data after successful decoding, the conversion is attempted again to get the next message. If one of the rules indicates that the data in the buffer is insufficient for an entire message (the function returns `2` (`INCOMPLETE_MESSAGE`) or decoding fails with error code `ET_INCOMPL_MSG`), the decoding is interrupted until the next fragment arrives in the buffer. If all conversion rules fail (the function returns `1` (`NOT_MY_TYPE`) or decoding fails with any error code other than `ET_INCOMPL_MSG`), a dynamic test case error occurs.

NOTE: Decoding with a sliding buffer should be the last decoding option in the list, and there should be only one such option. Otherwise the first sliding-buffer decoding might prevent later decoding options from being reached.

[[encoder-decoder-function-prototypes]]
=== Encoder/decoder Function Prototypes

Encoder/decoder functions are used to convert between different data (message) structures. Consider, for example, an octet string received from the remote system that should be passed to the upper layer as a TCP message.

Prototypes are attributes governing data input/output rules and conversion result indication; in other words, they set the data interface types. The compiler verifies that the parameters and return value correspond to the given prototype. Any TTCN–3 function (even an external one) may be defined with a prototype. The four prototypes are defined as follows:

* prototype `convert`
+
Functions of this prototype have one parameter (i.e. the data to be converted), which shall be an `in` value parameter, and the result is obtained in the return value of the function.
+
Example:
[source]
----
external function f_convert(in A param_ex) return B
with { extension "prototype(convert)" }
----
+
The input data received in the parameter `param_ex` of type A is converted. The result returned is of type B.

* prototype `fast`
+
Functions of this prototype have one input parameter (the same as above) but the result is obtained in an `out` value parameter rather than in return value. Hence, a faster operation is possible as there is no need to copy the result if the target variable is passed to the function. The order of the parameters is fixed: the first one is always the input parameter and the last one is the output parameter.
+
Example:
[source]
----
external function f_fast(in A param_1, out B param_2)
with { extension "prototype(fast)" }
----
+
The input data received in the parameter `param_1` of type A is converted. The resulting data of type B is contained in the output parameter `param_2` of type B.

* prototype `backtrack`
+
Functions of this prototype have the same data input/output structure as prototype `fast`, but an additional integer value is returned to indicate success or failure of the conversion process. In case of conversion failure the contents of the output parameter are undefined. These functions can only be used for decoding. The following return values are defined to indicate the outcome of the decoding operation:
+
--
** 0 (`OK`). Decoding was successful; the result is stored in the out parameter.

** 1 (`NOT_MY_TYPE`). Decoding was unsuccessful because the input parameter does not contain a valid message of type `B`. The content of the out parameter is undefined.
--
+
Example:
[source]
----
external function f_backtrack(in A param_1, out B param_2) return integer
with { extension "prototype(backtrack)" }
----

The input data received in the parameter `param_1` of type A is converted. The resulting data of type B is contained in the output parameter `param_2` of type B. The function return value (an integer) indicates success or failure of the conversion process.

* prototype `sliding`
+
Functions of this prototype have the same behavior as those of prototype `backtrack`; consequently, these functions can only be used for decoding. The difference is that the input parameter need not contain exactly one message: it may contain a fragment of a message or several concatenated messages stored in a FIFO buffer. The first parameter of the function is an `inout` value parameter, which is a reference to a buffer of type `octetstring` or `charstring`. The function attempts to recognize an entire message. If it succeeds, the message is removed from the beginning of the FIFO buffer, hence the name of this prototype: sliding (buffer). In case of failure the contents of the buffer remain unchanged. The return value indicates success or failure of the conversion process or insufficiency of input data as follows:
+
--
** 0 (`OK`). Decoding was successful; the result is stored in the out parameter. The decoded message was removed from the beginning of the inout parameter which is used as a sliding buffer.

** 1 (`NOT_MY_TYPE`). Decoding was unsuccessful because the input parameter does not contain or start with a valid message of type B. The buffer (`inout` parameter) remains unchanged. The content of out parameter is undefined.

** 2 (`INCOMPLETE_MESSAGE`). Decoding was unsuccessful because the input stream does not contain a complete message (i.e. the end of the message is missing). The input buffer (inout parameter) remains unchanged. The content of out parameter is undefined.
--
+
Example:
[source]
----
external function f_sliding(inout A param_1, out B param_2) return integer
with { extension "prototype(sliding)" }
----
+
The first portion of the input data received in the parameter `param_1` of type `A` is converted. The resulting data of type B is contained in the output parameter `param_2` of type `B`. The return value indicates the outcome of the conversion process.
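To make the sliding mechanism concrete, here is a minimal hand-written TTCN-3 sketch of a `sliding` decoder for a hypothetical length-prefixed protocol (the first octet is assumed to carry the payload length; all names are illustrative):

[source]
----
function f_dec_sliding(inout octetstring buf, out octetstring msg)
return integer {
  if (lengthof(buf) < 1) { return 2; }        // INCOMPLETE_MESSAGE
  var integer len := oct2int(buf[0]);         // payload length from header
  if (lengthof(buf) < len + 1) { return 2; }  // INCOMPLETE_MESSAGE
  msg := substr(buf, 1, len);                 // extract the payload
  buf := replace(buf, 0, len + 1, ''O);       // slide: drop the decoded message
  return 0;                                   // OK
} with { extension "prototype(sliding)" }
----

Note that the buffer is modified only on success, matching the contract described above.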

[[automatic-generation-of-encoder-decoder-functions]]
=== Automatic Generation of Encoder/decoder Functions

Encoding and decoding are performed by {cpp} external functions using the built-in codecs. These functions can be generated automatically by the compiler. The present section deals with the attributes governing the function generation.

==== Input and Output Types

Automatically generated encoder/decoder functions must have an attribute `prototype` assigned. If the encoder/decoder function has been written manually, only the attribute `prototype` may be given. Automatically generated encoder/decoder functions must have either the attribute `encode` or the attribute `decode`. In the case of encoding, the input type of the function must be the (structured) type to be encoded, which in turn must have the appropriate encoding attributes needed for the specified encoding method. The output type of the encoding procedure must be `octetstring` (BER, RAW, XER and JSON coding) or `charstring` (TEXT coding). In case of decoding the functions work the other way around: the input type is `octetstring` or `charstring` and the output type can be any (structured) type with appropriate encoding attributes.

[[attribute-syntax]]
==== Attribute Syntax

The syntax of the `encode` and `decode` attributes is the following:

[source]
----
("encode"|"decode") "("("RAW"|"BER"|"TEXT"|"XER"|"JSON") [":" <codec_options>] ")"
----

BER encoding can be applied only for ASN.1 types.

The `<codec_options>` part specifies extra options for the particular codec. Currently it is applicable only in the case of BER and XML encoding/decoding. The `codec_options` are copied transparently into the parameter list of the {cpp} encoder/decoder function call in the generated function body, without checking the existence or correctness of the referenced symbols.

Example of prototype `convert`, BER encoding and decoding (the PDU is an ASN.1 type):
[source]
----
external function encode_PDU(in PDU pdu) return octetstring
with { extension "prototype(convert) encode(BER:BER_ENCODE_DER)" }
external function decode_PDU(in octetstring os) return PDU
with { extension "prototype(convert) decode(BER:BER_ACCEPT_ALL)" }
----

Example of prototype `convert`, XML encoding and decoding (the PDU is a TTCN-3 type):
[source]
----
external function encode_PDU(in PDU pdu) return octetstring
with { extension "prototype(convert) encode(XER:XER_EXTENDED)" }
external function decode_PDU(in octetstring os) return PDU
with { extension "prototype(convert) decode(XER:XER_EXTENDED)" }
----

[[codec-error-handling]]
==== Codec Error Handling

The TITAN codec API has some well defined function calls that control the behavior of the codecs in various error situations during encoding and decoding. An error handling method is set for each possible error type. The default error handling method can be overridden by specifying the `errorbehavior` attribute:

[source]
----
"errorbehavior" "(" <error_type> ":" <error_handling>
{ "," <error_type> ":" <error_handling> } ")"
----

Possible error types and error handlings are defined in <<13-references.adoc#_16, [16]>>, section "The common API". The value of `<error_type>` shall be a value of type `error_type_t` without the prefix `ET_`. The value of `<error_handling>` shall be a value of type `error_behavior_t` without the prefix `EB_`.

The TTCN–3 attribute `errorbehavior(INCOMPL_ANY:ERROR)`, for example, will be mapped to the following {cpp} statement:
[source]
----
TTCN_EncDec::set_error_behavior(TTCN_EncDec::ET_INCOMPL_ANY,
  TTCN_EncDec::EB_ERROR);
----

When using the `backtrack` or `sliding` decoding functions, the default error behavior has to be changed in order to avoid a runtime error if the `in` or `inout` parameter does not contain a type we could decode. With this change an integer value is returned carrying the fault code. Without this change a dynamic test case error is generated. Example:

[source]
----
external function decode_PDU(in octetstring os, out PDU pdu) return integer
with {
extension "prototype(backtrack)"
extension "decode(BER:BER_ACCEPT_LONG|BER_ACCEPT_INDEFINITE)"
extension "errorbehavior(ALL:WARNING)"
}
----

=== Handling of encode and variant attributes

The TITAN compiler offers two different ways of handling encoding-related attributes:

* the new (standard compliant) handling method, and
* the legacy handling method, for backward compatibility.

==== New codec handling

This method of handling `encode` and `variant` attributes is active by default. It supports many of the newer encoding-related features added to the TTCN-3 standard.

Differences from the legacy method:

* `encode` and `variant` attributes can be defined for types as described in the TTCN-3 standard (although the type restrictions for built-in codecs still apply);
* a type can have multiple `encode` attributes (this provides the option to choose from multiple codecs, even user-defined ones, when encoding values of that type);
* ASN.1 types automatically have `BER`, `JSON`, `PER` (see section <<PER-encoding, PER encoding and decoding through user defined functions>>), and XML (if the compiler option `-a` is set) encoding (they are treated as if they had the corresponding `encode` attributes);
* encoding-specific `variant` attributes are supported (e.g.: `variant "XML"."untagged"`);
* the parameters `encoding_info/decoding_info` and `dynamic_encoding` of predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar` are supported (the `dynamic_encoding` parameter can be used for choosing the codec to use for values of types with multiple encodings; the `encoding_info`/`decoding_info` parameters are currently ignored);
* the `self.setencode` version of the `setencode` operation is supported (it can be used for choosing the codec to use for types with multiple encodings within the scope of the current component);
* the `@local` modifier is supported for `encode` attributes;
* a type's default codec (used by `decmatch` templates, the `@decoded` modifier, and the predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar` when no dynamic encoding parameter is given) is:
** its one defined codec, if it has exactly one codec defined; or
** unspecified, if it has multiple codecs defined (in this case the mentioned methods of encoding/decoding can only be used if a codec was selected for the type using `self.setencode`).

Differences from the TTCN-3 standard:

* switching codecs during the encoding or decoding of a structure is currently not supported (the entire structure will be encoded or decoded using the codec used at top level);
* the port-specific versions of the `setencode` operation are not supported (since messages sent through ports are not automatically encoded; see also dual-faced ports in section <<dual-faced-ports, Dual-faced Ports>>);
* the `@local` modifier only affects encode attributes, it does not affect the other attribute types;
* `encode` and `variant` attributes do not affect `constants`, `templates`, `variables`, `template` `variables` or `import` statements (these are accepted, but ignored by the compiler);
* references to multiple definitions in attribute qualifiers are not supported (e.g.: `encode (template all except (t1)) "RAW"`);
* retrieving attribute values is not supported (e.g.: `var universal charstring x := MyType.encode`).
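The multiple-codec features listed above can be sketched as follows (the type, field and function names are illustrative, and the RAW coding of the record is assumed to work with default attributes):

[source]
----
type record MyRec {
  integer num
} with {
  encode "RAW";
  encode "JSON";
}

function f_demo(in MyRec v) {
  // select the codec per call via the dynamic_encoding parameter
  var bitstring b_raw  := encvalue(v, -, "RAW");
  var bitstring b_json := encvalue(v, -, "JSON");
  // or select it once for the scope of the current component
  self.setencode(MyRec, "JSON");
  var bitstring b := encvalue(v); // JSON is now the chosen codec
}
----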

[[legacy-codec-handling]]
==== Legacy codec handling

This is the method of handling encode and variant attributes that was used before version 6.3.0 (/6 R3A). It can be activated through the compiler command line option `-e`.

Differences from the new method:

* each codec has its own rules for defining `encode` and `variant` attributes;
* a type can only have one `encode` attribute (if more than one is defined, then only the last one is considered), however, it can have `variant` attributes that belong to other codecs (this can make determining the default codec tricky);
* ASN.1 types automatically have `BER`, `JSON`, `PER` (see section <<PER-encoding, PER encoding and decoding through user defined functions>>), and `XML` (if the compiler option -a is set) encoding, however the method of setting a default codec (for the predefined functions `encvalue`, `decvalue`, `encvalue_unichar`, `decvalue_unichar`, for `decmatch` templates, and for the `@decoded` modifier) is different (see section <<setting-the-default-codec-for-asn-1-types, Setting the default codec for ASN.1 types>>);
* encoding-specific `variant` attributes are not supported (e.g.: `variant "XML"."untagged"`);
* the parameters `encoding_info/decoding_info` and `dynamic_encoding` of predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar` are ignored;
* the `setencode` operation is not supported;
* the `@local` modifier is not supported.
* the TTCN-3 language elements that automatically encode or decode (i.e. predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar`, `decmatch` templates, and value and parameter redirects with the `@decoded` modifier) ignore the `encode` and `variant` attributes in reference types and encode/decode values as if they were values of the base type (only the base type's `encode` and `variant` attributes are in effect in these cases). Encoder and decoder external functions take all of the type's attributes into account. For example:

[source]
----
type record BaseType {
  integer field1,
  charstring field2
}
with {
  encode "XML";
  variant "name as uncapitalized";
}

type BaseType ReferenceType
with {
  encode "XML";
  variant "name as uncapitalized";
}

external function f_enc(in ReferenceType x) return octetstring
  with { extension "prototype(convert) encode(XER:XER_EXTENDED)" }

function f() {
  var ReferenceType val := { field1 := 3, field2 := "abc" };
  
  var charstring res1 := oct2char(bit2oct(encvalue(val)));
  // "<baseType>\n\t<field>3</field>\n</baseType>\n\n"
  // it's encoded as if it were a value of type 'BaseType',
  // the name and attributes of type 'ReferenceType' are ignored
  
  var charstring res2 := oct2char(f_enc(val));
  // "<referenceType>\n\t<field>3</field>\n</referenceType>\n\n"
  // it's encoded correctly, as a value of type 'ReferenceType'
}
----

The differences from the TTCN-3 standard listed in the previous section also apply to the legacy method.

[[setting-the-default-codec-for-asn-1-types]]
===== Setting the default codec for ASN.1 types

Since ASN.1 types cannot have `encode` or `variant` attributes, the compiler determines their encoding type by checking external encoder or decoder functions (of built-in encoding types) declared for the type.

The TITAN runtime does not directly call these external functions, they simply indicate which encoding type to use when encoding or decoding the ASN.1 type in question through predefined functions `encvalue` and `decvalue`, decoded content matching (`decmatch` templates) and in value and parameter redirects with the `@decoded` modifier.

These external functions can be declared with any prototype, and with the regular stream type of either `octetstring` or `charstring` (even though `encvalue` and `decvalue` have `bitstring` streams).

An ASN.1 type cannot have several external encoder or decoder functions of different (built-in or PER) encoding types, as in that case the compiler would not know which encoding to use. Multiple encoder or decoder functions of the same encoding type can be declared for one type.

NOTE: These requirements are only checked if there is at least one `encvalue`, `decvalue`, `decmatch` template or decoded parameter or value redirect in the compiled modules. They are also checked separately for encoding and decoding (meaning that, for example, multiple encoder functions do not cause an error if the modules contain only `decvalue` calls and no `encvalue` calls). +
The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the encvalue, decvalue, decmatch or @decoded is located).

Example:
[source]
----
external function f_enc_seq(in MyAsnSequenceType x) return octetstring
with { extension "prototype(convert) encode(JSON)" }

external function f_dec_seq(in octetstring x, out MyAsnSequenceType y)
with { extension "prototype(fast) decode(JSON)" }



var MyAsnSequenceType v_seq := { num := 10, str := "abc" };
var bitstring v_enc := encvalue(v_seq); // uses the JSON encoder

var MyAsnSequenceType v_seq2;
var integer v_result := decvalue(v_enc, v_seq2); // uses the JSON decoder
----

[[calling-user-defined-encoding-functions-with-encvalue-and-decvalue]]
=== Calling user defined encoding functions with encvalue and decvalue

The predefined functions `encvalue` and `decvalue` can be used to encode and decode values with user defined external functions (custom encoding and decoding functions).

These functions must have the `encode`/`decode` and `prototype` extension attributes, similarly to built-in encoder and decoder functions, except the name of the encoding (the string specified in the `encode` or `decode` extension attribute) must not be equal to any of the built-in encoding names (e.g. BER, TEXT, XER, etc.).

The compiler generates calls to these functions whenever `encvalue` or `decvalue` is called, or whenever a matching operation is performed on a `decmatch` template, or whenever a redirected value or parameter is decoded (with the `@decoded` modifier), if the value’s type has the same encoding as the encoder or decoder function (the string specified in the type’s `encode` attribute is equivalent to the string in the external function’s `encode` or `decode` extension attribute).

Restrictions:

* only one custom encoding and one custom decoding function can be declared per user-defined codec (only checked if `encvalue`, `decvalue`, `decmatch` or `@decoded` are used at least once on the type)
* the prototype of custom encoding functions must be `convert`
* the prototype of custom decoding functions must be `sliding`
* the stream type of custom encoding and decoding functions is `bitstring`

NOTE: Although theoretically variant attributes can be added for custom encoding types, their coding functions would not receive any information about them, so they would essentially be regarded as comments. If custom variant attributes are used, the variant attribute parser’s error level must be lowered to warnings with the compiler option `-E`. +
The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the `encvalue`, `decvalue`, `decmatch` or `@decoded` is located; if this is the case, then an extra include statement is added in the generated {cpp} code to the header generated for the coder function’s module).

Example:
[source]
----
type union Value {
  integer intVal,
  octetstring byteVal,
  charstring strVal
}
with {
  encode "abc";
}

external function f_enc_value(in Value x) return bitstring
 with { extension "prototype(convert) encode(abc)" }

external function f_dec_value(inout bitstring b, out Value x) return integer
with { extension "prototype(sliding) decode(abc)" }



var Value x := { intVal := 3 };
var bitstring bs := encvalue(x); // equivalent to f_enc_value(x)

var integer res := decvalue(bs, x); // equivalent to f_dec_value(bs, x)
----

[[PER-encoding]]
=== PER encoding and decoding through user defined functions

TITAN does not have a built-in PER codec, but it does provide the means to call user defined PER encoder and decoder external functions when using `encvalue`, `decvalue`, `decmatch` templates, and value and parameter redirects with the `@decoded` modifier.

This can be achieved the same way as the custom encoder and decoder functions described in section <<calling-user-defined-encoding-functions-with-encvalue-and-decvalue, Calling user defined encoding functions with encvalue and decvalue>>, except the name of the encoding (the string specified in the encode or decode extension attribute) must be PER.

This can only be done for ASN.1 types, and has the same restrictions as the custom encoder and decoder functions. There is one extra restriction when using legacy codec handling (see section <<setting-the-default-codec-for-asn-1-types, Setting the default codec for ASN.1 types>>): an ASN.1 type cannot have both a PER encoder/decoder function and an encoder/decoder function of a built-in type set (this is checked separately for encoding and decoding).

NOTE: The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the `encvalue`, `decvalue`, `decmatch` or `@decoded` is located; if this is the case, then an extra include statement is added in the generated {cpp} code to the header generated for the coder function’s module).

Example:
[source]
----
external function f_enc_per(in MyAsnSequenceType x) return bitstring
with { extension "prototype(convert) encode(PER)" }

external function f_dec_per(in bitstring x, out MyAsnSequenceType y)
with { extension "prototype(fast) decode(PER)" }



var MyAsnSequenceType x := { num := 10, str := "abc" };
var bitstring bs := encvalue(x); // equivalent to f_enc_per(x)

var MyAsnSequenceType y;
var integer res := decvalue(bs, y); // equivalent to f_dec_per(bs, y);
----

=== Common Syntax of Attributes

All information related to implicit message encoding shall be given as `extension` attributes of the relevant TTCN–3 definitions. The attributes have a common basic syntax, which is applicable to all attributes given in this section:

* Whitespace characters (spaces, tabulators, newlines, etc.) and TTCN–3 comments are allowed anywhere in the attribute text. Attributes containing only comments, whitespace or both are simply ignored +
Example: +
`with { extension "/* this is a comment */" }`
* When a definition has multiple attributes, the attributes can be given either in one attribute text separated by whitespace or in separate TTCN–3 attributes. +
Example: +
`with { extension "address provider" }` means exactly the same as +
`with { extension "address"; extension "provider" }`
* Settings for a single attribute, however, cannot be split in several TTCN–3 attributes. +
Example of an invalid attribute: +
`with { extension "prototype("; extension "convert)" }`
* Each kind of attribute can be given at most once for a definition. +
Example of an invalid attribute: +
`with { extension "internal internal" }`
* The order of attributes is not relevant. +
Example: +
`with { extension "prototype(fast) encode(RAW)" }` means exactly the same as +
`with { extension "encode(RAW) prototype(fast)" }`
* The keywords introduced in this section, which are not TTCN–3 keywords, are not reserved words. The compiler will recognize the word properly if it has a different meaning (e.g. the name of a type) in the given context. +
Example: the attribute +
`with { extension "user provider in(internal -> simple: function(prototype))" }` can be valid if there is a port type named `provider`; `internal` and `simple` are message types and `prototype` is the name of a function.

=== API describing External Interfaces

Since the default class hierarchy of test ports does not allow sharing of {cpp} code with other port types, an alternate internal API is introduced for port types describing external interfaces. This alternate internal API is selected by giving the appropriate TTCN–3 extension attribute to the port. The following extension attributes or attribute combinations can be used:

.Port extension attributes
[cols=",,,,,",options="header",]
|===
|*Attribute(s)* |*Test Port* |*Communication with SUT allowed* |*Using of SUT addresses allowed* |*External interface* |*Notes*
|nothing |normal |yes |no |own |
|internal |none |no |no |own |
|address |see <<13-references.adoc#_16, [16]>> "Support of address type" |yes |yes |own |
|provider |see <<13-references.adoc#_16, [16]>> "Provider port types" |yes |no |own |
|internal provider |none |no |no |own |means the same as internal
|address provider |see <<13-references.adoc#_16, [16]>> "Support of address type" and "Provider port types" |yes |yes |own |
|user PT … |none |yes |depends on PT |PT |PT must have attribute provider
|internal user PT … |none |no |no |PT |PT can have any attributes
|address user PT … |none |yes |yes |PT |PT must have attributes address and provider
|===
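
As an illustration of the table's second row, a port type marked `internal` needs no Test Port code at all; it can only be used for communication between test components. A minimal sketch (the port and message type names below are illustrative only):

[source]
----
// hypothetical example: no Test Port C++ class has to be written for
// this port type, but it cannot be mapped to the SUT either
type port PCO_Internal message {
  inout charstring;
} with { extension "internal" }
----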

=== BNF Syntax of Attributes

[source]
----
FunctionAttributes ::= {FunctionAttribute}
FunctionAttribute ::= PrototypeAttribute | TransparentAttribute

ExternalFunctionAttributes ::= {ExternalFunctionAttribute}
ExternalFunctionAttribute ::= PrototypeAttribute | EncodeAttribute | DecodeAttribute | ErrorBehaviorAttribute

PortTypeAttributes ::= {PortTypeAttribute}
PortTypeAttribute ::= InternalAttribute | AddressAttribute | ProviderAttribute | UserAttribute

PrototypeAttribute ::= "prototype" "(" PrototypeSetting ")"
PrototypeSetting ::= "convert" | "fast" | "backtrack" | "sliding"

TransparentAttribute ::= "transparent"

EncodeAttribute ::= "encode" "(" EncodingType [":" EncodingOptions] ")"
EncodingType ::= "BER" | "RAW" | "TEXT"| "XER" | "JSON"
EncodingOptions ::= {ExtendedAlphaNum}

DecodeAttribute ::= "decode" "(" EncodingType [":" EncodingOptions] ")"

ErrorBehaviorAttribute ::= "errorbehavior" "(" ErrorBehaviorSetting {"," ErrorBehaviorSetting} ")"
ErrorBehaviorSetting ::= ErrorType ":" ErrorHandling
ErrorType ::= ErrorTypeIdentifier | "ALL"
ErrorHandling ::= "DEFAULT" | "ERROR" | "WARNING" | "IGNORE"

InternalAttribute ::= "internal"

AddressAttribute ::= "address"

ProviderAttribute ::= "provider"

UserAttribute ::= "user" PortTypeReference {InOutTypeMapping}
PortTypeReference ::= [ModuleIdentifier "."] PortTypeIdentifier
InOutTypeMapping ::= ("in" | "out") "(" TypeMapping {";" TypeMapping} ")"
TypeMapping ::= MessageType "->" TypeMappingTarget {"," TypeMappingTarget}
TypeMappingTarget ::= (MessageType ":" (SimpleMapping | FunctionMapping | EncodeMapping | DecodeMapping)) | ("-" ":" DiscardMapping)

MessageType ::= PredefinedType | ReferencedMessageType
ReferencedMessageType ::= [ModuleIdentifier "."] (StructTypeIdentifier | EnumTypeIdentifier | SubTypeIdentifier | ComponentTypeIdentifier)

SimpleMapping ::= "simple"

FunctionMapping ::= "function" "(" FunctionReference ")"
FunctionReference ::= [ModuleIdentifier "."] (FunctionIdentifier | ExtFunctionIdentifier)

EncodeMapping ::= EncodeAttribute [ErrorBehaviorAttribute]

DecodeMapping ::= DecodeAttribute [ErrorBehaviorAttribute]

DiscardMapping ::= "discard"
----

Non-terminal symbols in bold are references to the BNF of the TTCN-3 Core Language (Annex A, <<13-references.adoc#_1, [1]>>).

Example:
[source]
----
type record ControlRequest { }
type record ControlResponse { }
type record PDUType1 { }
type record PDUType2 { }
// the encoder/decoder functions are written in {cpp}
external function enc_PDUType1(in PDUType1 par) return octetstring
with { extension "prototype(convert)" }
external function dec_PDUType1(in octetstring stream,
out PDUType1 result) return integer
with { extension "prototype(backtrack)" }

// port type PT1 is the external interface of the dual-faced port
// with its own Test Port. See section "The purpose of Test Ports" in the API guide.

type port PT1 message {
out ControlRequest;
in ControlResponse;
inout octetstring;
} with { extension "provider" }

// port type PT2 is the internal interface of the dual-faced port
// This port is communicating directly with the SUT using the Test Port of PT1.

type port PT2 message {
out ControlRequest;
inout PDUType1, PDUType2;
} with { extension "user PT1
  out(ControlRequest -> ControlRequest: simple;
      PDUType1 -> octetstring: function(enc_PDUType1);
      PDUType2 -> octetstring: encode(RAW))
  in(ControlResponse -> - : discard;
     octetstring -> PDUType1: function(dec_PDUType1),
                    PDUType2: decode(RAW),
                    - : discard)"
}

type component MTC_CT {
port PT2 MTC_PORT;
}

type component SYSTEM_SCT {
port PT1 SYSTEM_PORT;
}
testcase tc_DUALFACED() runs on MTC_CT system SYSTEM_SCT
{
map(mtc:MTC_PORT, system:SYSTEM_PORT);
MTC_PORT.send(PDUType1:{…});
MTC_PORT.receive(PDUType1:?);
}
----

The external face of the dual-faced port (defined by `PT1`) sends and receives the protocol messages as octetstrings. On the internal face of the same dual-faced port (defined by `PT2`) the octetstring is converted to two message types (`PDUType1`, `PDUType2`). The conversion happens both when sending and when receiving messages.

When sending messages, messages of type `PDUType1` will be converted as defined by the function `enc_PDUType1`; whereas messages of type `PDUType2` will be converted using the built-in conversion rules RAW.

When a piece of octetstring is received, decoding is first attempted using the function `dec_PDUType1`; if it succeeds, the resulting message is of type `PDUType1`. When decoding with `dec_PDUType1` is unsuccessful, the octetstring is decoded using the built-in conversion rules RAW; the resulting message is of type `PDUType2`. When none of the above conversions succeeds, the octetstring is discarded.

`ControlRequest` and `ControlResponse` will not be affected by a conversion in either direction.

image::images/dualfaced.png[Dual-faced port]

== RAW Encoder and Decoder

The RAW encoder and decoder are general purpose functionalities developed originally for handling legacy protocols.

The encoder converts abstract TTCN-3 structures (or types) into a bitstream suitable for serial transmission.

The decoder, on the contrary, converts the received bitstream into values of abstract TTCN-3 structures.

This section covers the <<general-rules-and-restrictions, coding rules in general>>, the <<attributes, attributes controlling them>> and the <<ttcn-3-types-and-their-attributes, attributes allowed for a particular type>>.

You can use the encoding rules defined in this section to encode and decode the following TTCN–3 types:

* bitstring
* boolean
* charstring
* enumerated
* float
* hexstring
* integer
* octetstring
* record
* record of, set of
* set
* union
* universal charstring

The compiler will produce code capable of RAW encoding/decoding if

. The module has the attribute `encode "RAW"`, in other words at the end of the module there is the text +
`with { encode "RAW" }`

. Compound types have at least one `variant` attribute. When a compound type is only used internally, or is never RAW encoded/decoded, the `variant` attribute should be omitted.

[NOTE]
====
When a type can be RAW encoded/decoded with the default specification, the empty variant specification can be used: `variant ""`. +
In order to reduce the code size the TITAN compiler only adds the RAW encoding if

a. either the type has a RAW variant attribute, OR +
b. the type is used by an upper level type definition with a RAW variant attribute.
====

Example: In this minimal introductory example there are two types to be RAW encoded, `OCT2` and `CX_Frame`, but only one of them has a RAW `variant` attribute.
[source]
----
module Frame {
external function enc_CX_frame( in CX_Frame cx_message ) return octetstring
with { extension "prototype(convert) encode(RAW)" }

external function dec_CX_frame( in octetstring stream ) return CX_Frame
with { extension "prototype(convert) decode(RAW)" }

type octetstring OCT2 length(2);
type record CX_Frame
{
  OCT2 data_length,
  octetstring data_stream
} with { variant "" }
} with { encode "RAW" }
----

[[general-rules-and-restrictions]]
=== General Rules and Restrictions

The TTCN-3 standard defines a mechanism using `attributes` to define, among others, encoding variants (see <<13-references.adoc#_1, [1]>>, chapter 27 "Specifying attributes"). However, the `attributes` to be defined are implementation specific. This and the following chapters describe each `attribute` available in TITAN.

==== General Rules

If an `attribute` can be assigned to a given type, it can almost always also be assigned to fields of the same type in a `record`, `set` or `union`. Attributes belonging to a `record` or `set` field override the effect of the same attributes specified for the type of the field.

The location of an attribute is evaluated before the attribute itself. This means that if an attribute is overridden, due to its qualification or the overriding rules (or both), its validity at the given location is not checked.

It is not recommended to use the attributes `LENGTHTO`, `LENGTHINDEX`, `TAG`, `CROSSTAG`, `PRESENCE`, `UNIT`, `POINTERTO`, `PTROFFSET` with dotted qualifiers as it may lead to confusion.

Octetstrings and records with extension bit shall be octet aligned, that is, they should start and end on an octet boundary.

Errors encountered during the encoding or decoding process are handled as defined in section "Setting error behavior" in <<13-references.adoc#_16, [16]>>.
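
The error behavior of external coder functions can be tuned with the `errorbehavior` extension attribute shown in the BNF above; as a sketch, the following decoder demotes all coding errors to warnings (the function and type names are illustrative only):

[source]
----
// illustrative sketch: all coding errors are reported as warnings
// instead of stopping the test case with an error
external function dec_MyPDU(in octetstring stream, out MyPDU result)
return integer
with { extension "prototype(backtrack) decode(RAW) errorbehavior(ALL:WARNING)" }
----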

=== Rules Concerning the Encoder

The encoder doesn’t modify the data to be encoded; instead, it substitutes the value of calculated fields (`length`, `pointer`, `tag`, `crosstag` and `presence` fields) with the calculated value in the encoded bitfield if necessary.

The value of the `pointer` and `length` fields are calculated during encoding and the resulting value will be used in sending operations. During decoding, the decoder uses the received length and pointer information to determine the length and the place of the fields.
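
As a sketch of such a calculated field, a length field is typically declared with the `LENGTHTO` attribute (described later in this section); the type and field names below are illustrative only:

[source]
----
type record LengthPrefixed {
  integer len,
  octetstring payload
} with {
  variant (len) "LENGTHTO(payload)";
  variant (len) "FIELDLENGTH(8)"
}
// when encoding, the value given for 'len' is ignored and replaced
// by the calculated length of 'payload'; when decoding, the received
// 'len' determines how many octets are read into 'payload'
----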

During encoding, the encoder sets the value of the `presence`, `tag` and `crosstag` fields according to the presence of the `optional` and `union` fields.

=== Rule Concerning the Decoder

The decoder determines the presence of the optional fields on the basis of the value of the `tag`, `crosstag` and `presence` fields.

[[attributes]]
=== Attributes

An `attribute` determines coding and encoding rules. In this section the `attributes` are grouped according to their function.

==== Attributes Governing Conversion of TTCN-3 Types into Bitfields

This section defines the attributes describing how a TTCN-3 type is converted to a bitfield.

*BITORDERINFIELD*

Attribute syntax: `BITORDERINFIELD(<parameter>)`

Parameters allowed: `msb`, `lsb`

Default value: `lsb`

Can be used with: stand-alone types, or a field of a `record` or `set`.

Description: This attribute specifies the order of the bits within a field. When set to `msb`, the first bit sent will be the most significant bit of the original field. When set to `lsb`, the first bit sent will be the least significant bit of the original field.

Comment: The effect of `BITORDERINFIELD(msb)` is equal to the effect of `BITORDER(msb) BYTEORDER(last)`.

Example:
[source]
----
type bitstring BITn
with {
variant "BITORDERINFIELD(lsb)"
}

const BITn c_bits := '10010110'B
//Encoding of c_bits gives the following result: 10010110

type bitstring BITnreverse
with {
variant "BITORDERINFIELD(msb)"
}

const BITnreverse c_bitsrev := '10010110'B
//Encoding of c_bitsrev gives the following result: 01101001
----

*COMP*

Attribute syntax: `COMP(<parameter>)`

Parameters allowed: `nosign`, `2scompl`, `signbit`

Default value: `nosign`

Can be used with: stand-alone types or the field of a `record` or `set`.

Description: This attribute specifies the type of encoding of negative integer numbers as follows: +
`nosign`: negative numbers are not allowed; +
`2scompl`: 2’s complement encoding; +
`signbit`: sign bit and the absolute value is coded. (Only with integer and enumerated types.)

Examples:
[source]
----
//Example number 1): coding with sign bit
type integer INT1
with {
variant "COMP(signbit)";
variant "FIELDLENGTH(8)"
}

const INT1 c_i := -1
//Encoded c_i: 10000001 '81'O
//    sign bit ^

//Example number 2): two's complement coding
type integer INT2
with {
variant "COMP(2scompl)";
variant "FIELDLENGTH(8)"
}

const INT2 c_i2 := -1
//Encoded c_i2: 11111111 'FF'O
----

*FIELDLENGTH*

Attribute syntax: `FIELDLENGTH(<parameter>)`

Parameters allowed: `variable`, `null_terminated` (for `charstring` and `universal charstring` types only), positive integer

Default value: `variable`, 8 (for `integer` type only)

Can be used with:

* `integer`;
* `enumerated`;
* `octetstring`;
* `charstring`;
* `bitstring`;
* `hexstring`;
* `universal charstring`;
* `record` fields;
* `set` fields;
* `record of` types;
* `set of` types.

Description: `FIELDLENGTH` specifies the length of the encoded type. The units of the parameter value for specific types are the following:

* `integer, enumerated, bitstring:` bits;
* `octetstring, universal charstring:` octets;
* `charstring:` characters;
* `hexstring:` hex digits;
* `set of/record of:` elements.

The value 0 means variable length or, in case of the `enumerated` type, the minimum number of bits required to represent the maximum `enumerated` value. `Integer` cannot be coded with variable length.

NOTE: If `FIELDLENGTH` is not specified, but a TTCN–3 length restriction with a fixed length is, then the restricted length will be used as `FIELDLENGTH`.
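
For instance, the note above means that the following two declarations are encoded the same way (the type names are illustrative only):

[source]
----
// the fixed length restriction doubles as the field length
type octetstring OCT4 length(4);

// equivalent explicit form
type octetstring OCT4expl
with { variant "FIELDLENGTH(4)" }
----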

Examples:
[source]
----
//Example number 1): variable length octetstring
type octetstring OCTn
with {
variant "FIELDLENGTH(variable)"
}

//Example number 2): 22 bit length bitstrings
type bitstring BIT22
with {
variant "FIELDLENGTH(22)"
}

type record SomeRecord {
bitstring field
}
with {
variant (field) "FIELDLENGTH(22)"
}

// Null terminated strings
type charstring null_str with {variant "FIELDLENGTH(null_terminated)"}
type universal charstring null_ustr with { variant "FIELDLENGTH(null_terminated)"}
----

*N bit / unsigned N bit*

Attribute syntax: `[unsigned] <parameter> bit`

Parameters allowed: positive integer

Default value: -

Can be used with:

* `integer`;
* `enumerated`;
* `octetstring`;
* `charstring`;
* `bitstring`;
* `hexstring`;
* `record` fields;
* `set` fields.

Description: This attribute sets the `FIELDLENGTH`, `BYTEORDER` and `COMP` attributes to the following values:

* `BYTEORDER` is set to `last`.
* `N bit` sets `COMP` to `signbit`, while `unsigned` `N` `bit` sets `COMP` to `nosign` (its default value).
* Depending on the encoded value’s type `FIELDLENGTH` is set to: +
`integer, enumerated, bitstring, boolean:` N; +
`octetstring, charstring:` N / 8; +
`hexstring:` N / 4.

NOTE: If `FIELDLENGTH` is not specified, but a TTCN–3 length restriction with a fixed length is, then the restricted length will be used as `FIELDLENGTH`.

The `[unsigned] <parameter> bits` syntax is also supported, but the usage of the `bit` keyword is preferred.

Examples:
[source]
----
//Example number 1): integer types
type integer Short (-32768 .. 32767)
with { variant "16 bit" };

// is equal to:
type integer ShortEq (-32768 .. 32767)
with { variant "FIELDLENGTH(16), COMP(signbit), BYTEORDER(last)" };

type integer UnsignedLong (0 .. 4294967295)
with { variant "unsigned 32 bit" };

// is equal to:
type integer UnsignedLongEq (0 .. 4294967295)
with { variant "FIELDLENGTH(32), COMP(nosign), BYTEORDER(last)" };

//Example number 2): string types
type hexstring HStr20
with { variant "unsigned 20 bit" };

// 20 bits = 5 hex nibbles, `unsigned' is ignored
type hexstring HStr20Eq
with { variant "FIELDLENGTH(5), BYTEORDER(last)" };

type octetstring OStr32
with { variant "32 bit" };

// 32 bits = 4 octets
type octetstring OStr32Eq
with { variant "FIELDLENGTH(4), BYTEORDER(last)" };

type charstring CStr64 with
{ variant "64 bit" };

// 64 bits = 8 characters
type charstring CStr64Eq
with { variant "FIELDLENGTH(8), BYTEORDER(last)" };
----

*FORMAT*

Attribute syntax: `FORMAT(<parameter>)`

Parameters allowed: `IEEE754 double`, `IEEE754 float`

Default value: `IEEE754 double`

Can be used with: `float` type.

Description: `FORMAT` specifies the encoding format of `float` values. +
`IEEE754 double:` The `float` value is encoded as specified in standard IEEE754 using 1 sign bit, 11 exponent bits and 52 bits for mantissa. +
`IEEE754 float:` The `float` value is encoded as specified in standard IEEE754 using 1 sign bit, 8 exponent bits and 23 bits for mantissa.

Examples:
[source]
----
//Example number 1): single precision float
type float Single_float
with {
variant "FORMAT(IEEE754 float)"
}

//Example number 2): double precision float
type float Double_float
with {
variant "FORMAT(IEEE754 double)"
}
----

==== Attributes Controlling Conversion of Bitfields into a Bitstream

This section defines the attributes describing how bits and octets are put into the buffer.

*BITORDER*

Attribute syntax: `BITORDER(<parameter>)`

Parameters allowed: `msb`, `lsb`

Default value: `lsb`

Can be used with: stand-alone types or the field of a `record` or `set`.

Description: This attribute specifies the order of the bits within an octet. When set to `lsb`, the first bit sent will be the least significant bit of the original byte. When set to `msb`, the first bit sent will be the most significant bit of the original byte. When applied to an `octetstring` using the extension bit mechanism, only the least significant 7 bits are reversed, the 8th bit is reserved for the extension bit.

Examples:
[source]
----
// Example number 1)
type octetstring OCT
with {
variant "BITORDER(lsb)"
}

const OCT c_oct := ’123456’O

//The encoded bitfield: 01010110 00110100 00010010
// last octet^ ^first octet
// The buffer will have the following content:
// 00010010
// 00110100
// 01010110

//The encoding result in the octetstring ’123456’O

//Example number 2)
type octetstring OCTrev
with {
variant "BITORDER(msb)"
}

const OCTrev c_octr := ’123456’O

//The encoded bitfield: 01010110 00110100 00010010

// last octet^ ^first octet

//The buffer will have the following content:
// 01001000
// 00101100
// 01101010

//The encoding results in the octetstring ’482C6A’O

//Example number 3)

type bitstring BIT12 with {
variant "BITORDER(lsb), FIELDLENGTH(12)"
}

const BIT12 c_bits:=’101101101010’B
//The encoded bitfield: 1011 01101010

// last octet^ ^first octet

// The buffer will have the following content:
// 01101010
// ….1011
// ^ next field

//The encoding will result in the octetstring ’6A.9’O

//Example number 4)
type bitstring BIT12rev with {
variant "BITORDER(msb), FIELDLENGTH(12)"
}

const BIT12rev c_BIT12rev:=’101101101010’B
//The encoded bitfield: 1011 01101010
// last octet^ ^first octet
//The buffer will have the following content:
// 01010110
// ….1101
// ^ next field
//The encoding will result in the octetstring ’56.D’O
----
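The `msb` setting mirrors the bits of every octet. The transformation of example 2) can be sketched in Python (illustration only, not part of the TTCN–3 codec):

```python
def reverse_bits(octet: int) -> int:
    """Mirror the 8 bits of one octet, as BITORDER(msb) does."""
    result = 0
    for i in range(8):
        result = (result << 1) | ((octet >> i) & 1)
    return result

data = bytes.fromhex("123456")
encoded = bytes(reverse_bits(b) for b in data)
print(encoded.hex())  # 482c6a, as in example 2)
```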

*BYTEORDER*

Attribute syntax: `BYTEORDER(<parameter>)`

Parameters allowed: `first`, `last`

Default value: `first`

Can be used with: stand-alone types or the field of a `record` or `set`.

Description: The attribute determines the order of the bytes in the encoded data.

* `first`: The first octet placed first into the buffer.
* `last`: The last octet placed first into the buffer.

Comment: The attribute has no effect on a single octet field.

NOTE: The attribute works differently for `octetstring` and `integer` types. The ordering of bytes is counted from left-to-right (starting from the MSB) in an `octetstring` but right-to-left (starting from the LSB) in an `integer`. Thus, the attribute `BYTEORDER(first)` for an `octetstring` results in the same encoded value as `BYTEORDER(last)` for an `integer` having the same value.

Examples:
[source]
----
//Example number 1)
type octetstring OCT
with {
variant "BYTEORDER(first)"
}

const OCT c_oct := ’123456’O
//The encoded bitfield: 01010110 00110100 00010010
// last octet^ ^first octet

// The buffer will have the following content:
// 00010010
// 00110100
// 01010110

//The encoding will result in the octetstring ’123456’O

//Example number 2)
type octetstring OCTrev
with {variant "BYTEORDER(last)"
}

const OCTrev c_octr := ’123456’O
//The encoded bitfield: 01010110 00110100 00010010
// last octet^ ^first octet

//The buffer will have the following content:

// 01010110

// 00110100

// 00010010

//The encoding will result in the octetstring ’563412’O
//Example number 3)
type bitstring BIT12 with {
variant "BYTEORDER(first), FIELDLENGTH(12)"
}

const BIT12 c_bits:=’100101101010’B
//The encoded bitfield: 1001 01101010
// last octet^ ^first octet
// The buffer will have the following content:
// 01101010
// ….1001
// ^ next field

//The encoding will result in the octetstring ’6A.9’O
//Example number 4)
type bitstring BIT12rev with {
variant "BYTEORDER(last), FIELDLENGTH(12)"
}

const BIT12rev c_bits:=’100101101010’B
//The encoded bitfield: 1001 01101010
// last octet^ ^first octet
//The buffer will have the following content:
// 10010110
// ….1010
// ^ next field
//The encoding will result in the octetstring ’96.A’O

----
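For a field that is a whole number of octets, `BYTEORDER(last)` simply reverses the octet order, as a Python sketch of example 2) shows (illustration only, not TITAN code):

```python
data = bytes.fromhex("123456")

# BYTEORDER(last): the last octet is placed first into the buffer
encoded = data[::-1]
print(encoded.hex())  # 563412, as in example 2)
```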

*FIELDORDER*

Attribute syntax: `FIELDORDER(<parameter>)`

Parameters allowed: `msb`, `lsb`

Default value: `lsb`

Can be used with: `record` or `set` types. It can also be assigned to a group of fields of a `record`.

Description: The attribute specifies the order in which consecutive fields of a structured type are placed into octets.

* `msb:` coded bitfields are concatenated within an octet starting from the MSB; when a field crosses an octet boundary, it continues at the MSB of the next octet.
* `lsb:` coded bitfields are concatenated within an octet starting from the LSB; when a field crosses an octet boundary, it continues at the LSB of the next octet.

Comment: Fields within an octet must be coded with the same `FIELDORDER`. +
Fields are always concatenated in increasing octet number direction. +
`FIELDORDER` has a slightly different effect than the order attributes. While `FIELDORDER` shifts the location of coded bitfields inside octets, the order attributes describe the order of the bits within a bitfield. +
There is NO connection between the effect of the `FIELDORDER` and the effects of the other order attributes.

NOTE: The attribute does not extend to lower level structures. If the same field order is desired for the fields of a lower level `record`/`set`, then that `record`/`set` also needs a `FIELDORDER` attribute.

Examples:
[source]
----
//Example number 1)
type record MyRec_lsb {
BIT1 field1,
BIT2 field2,
BIT3 field3,
BIT4 field4,
BIT6 field5
}

with { variant "FIELDORDER(lsb)" }
const MyRec_lsb c_pdu := {
field1:=’1’B, // bits of field1: a
field2:=’00’B, // bits of field2: b
field3:=’111’B, // bits of field3: c
field4:=’0000’B, // bits of field4: d
field5:=’111111’B // bits of field5: e
}

//Encoding of c_pdu will result in:
// 00111001 ddcccbba
// 11111100 eeeeeedd
//Example number 2)

type record MyRec_msb {
BIT1 field1,
BIT2 field2,
BIT3 field3,
BIT4 field4,
BIT6 field5
}

with { variant "FIELDORDER(msb)" }
const MyRec_msb c_pdu2 := {
field1:=’1’B, // bits of field1: a
field2:=’00’B, // bits of field2: b
field3:=’111’B, // bits of field3: c
field4:=’0000’B, // bits of field4: d
field5:=’111111’B // bits of field5: e
}

//Encoding of c_pdu2 will result in:
// 10011100 abbcccdd
// 00111111 ddeeeeee
----

*HEXORDER*

Attribute syntax: `HEXORDER(<parameter>)`

Parameters allowed: `low`, `high`

Default value: `low`

Can be used with: `hexstring` or `octetstring` type.

Description: Order of the hex digits in the encoded data.

* `low:` The hex digit in the lower nibble of the octet is put in the lower nibble of the octet in the buffer.
* `high:` The hex digit in the higher nibble of the octet is put in the lower nibble of the octet in the buffer. (The nibbles are swapped.)

NOTE: Only the whole octet is swapped if necessary. For more details see the example.

Examples:
[source]
----
//Example number 1)
type hexstring HEX_high
with {variant "HEXORDER(high)"}

const HEX_high c_hexs := ’12345’H
//The encoded bitfield: 0101 00110100 00010010
// last octet^ ^first octet

//The buffer will have the following content:
// 00010010 12
// 00110100 34
// ….0101 .5
// ^ next field
//The encoding will result in the octetstring ’1234.5’O

//Example number 2)
type hexstring HEX_low
with {variant "HEXORDER(low)"}
const HEX_low c_hexl := ’12345’H

//The encoded bitfield: 0101 00110100 00010010
// last octet^ ^first octet
//The buffer will have the following content:
// 00100001 21
// 01000011 43
// ….0101 .5 ←not twisted!
// ^ next field
//The encoding will result in the octetstring ’2143.5’O

//Example number 3)
type octetstring OCT
with {variant "HEXORDER(high)"}

const OCT c_hocts := ’1234’O
//The encoded bitfield: 00110100 00010010
// last octet^ ^first octet
//The buffer will have the following content:
// 00100001 21
// 01000011 43
//The encoding will result in the octetstring ’2143’O
----
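`HEXORDER(high)` swaps the two nibbles of every whole octet, while a trailing lone nibble stays in place. A Python sketch of example 3) (illustration only, not TITAN code):

```python
def hexorder_high(data: bytes) -> bytes:
    """Swap the two nibbles of every octet, as HEXORDER(high) does."""
    return bytes(((b & 0x0F) << 4) | (b >> 4) for b in data)

print(hexorder_high(bytes.fromhex("1234")).hex())  # 2143, as in example 3)
```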

*CSN.1 L/H*

Attribute syntax: `CSN.1 L/H`

Default value: unset

Can be used with: all basic types, `records`/`sets`/`unions` (in which case the attribute is set for all fields of the `record`/`set`/`union`)

Description: If set, the bits in the bitfield are treated as the relative values `L` and `H` from `CSN.1` instead of their absolute values (`0` is treated as `L` and `1` is treated as `H`). These values are encoded in terms of the default padding pattern '2B'O ('00101011'B), depending on their position in the bitstream.

Practically the bits in the bitfield are XOR-ed with the pattern '2B'O before being inserted into the stream.

Example:
[source]
----
type integer uint16_t
with { variant "FIELDLENGTH(16)" variant "CSN.1 L/H" }

const uint16_t c_val := 4080;
// Without the variant attribute "CSN.1 L/H" this would be encoded as '11110000 00001111'B
// With the variant attribute "CSN.1 L/H" this would be encoded as '11011011 00100100'B
----
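Since the transformation is an XOR with the repeating pattern '2B'O, it can be reproduced octet by octet (Python sketch, illustration only, not TITAN code):

```python
def csn1_lh(stream: bytes) -> bytes:
    """XOR every octet of the encoded stream with the pattern '2B'O."""
    return bytes(b ^ 0x2B for b in stream)

# encoding of c_val without the variant: '11110000 00001111'B
plain = bytes([0b11110000, 0b00001111])
coded = csn1_lh(plain)
print(format(coded[0], "08b"), format(coded[1], "08b"))
# 11011011 00100100, as in the example
```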

==== Extension Bit Setting Attributes

This section defines the attributes describing the extension bit mechanism.

The extension bit mechanism allows the size of an Information Element (IE) to be increased by using the most significant bit (MSB, bit 7) of an octet as an extension bit. When an octet within an IE has bit 7 defined as an extension bit, then the value `0` in that bit position indicates that the following octet is an extension of the current octet. When the value is `1`, the octet is not continued.

*EXTENSION_BIT*

Attribute syntax: `EXTENSION_BIT(<parameter>)`

Parameters allowed: `no`, `yes`, `reverse`

Default value: none

Can be used with:

* `octetstring`,
* (fields of a) `record`,
* `set`,
* `record of`,
* `set of`.

Description: When `EXTENSION_BIT` is set to `yes`, then each MSB is set to 0 except the last one which is set to 1. When `EXTENSION_BIT` is set to `reverse`, then each MSB is set to 1 and the MSB of the last octet is set to 0 to indicate the end of the Information Element. When `EXTENSION_BIT` is set to `no`, then no changes are made to the MSBs.

NOTE: In case of the types `record of` and `set of` the last bit of the element of the structured type will be used as the extension bit. The data in the MSBs will be overwritten during encoding. When the extension bit belongs to a `record`, the field containing the extension bit must be explicitly declared in the type definition. Likewise, the last bit of the element type of a `record of` or `set of` shall be reserved for the extension bit in the type definition.

Examples:
[source]
----
//Example number 1)
type octetstring OCTn
with {variant "EXTENSION_BIT(reverse)"}
const OCTn c_octs:=’586211’O

//The encoding will have the following result:
// 11011000
// 11100010
// 00010001
// ˆ the overwritten EXTENSION_BITs

//The encoding will result in the octetstring ’D8E211’O
//Example number 2)

type record Rec3 {
BIT7 field1,
BIT1 extbit1,
BIT7 field2 optional,
BIT1 extbit2 optional
}

with { variant "EXTENSION_BIT(yes)" }
const Rec3 c_MyRec := {
field1:=’1000001’B,
extbit1:=’1’B,
field2:=’1011101’B,
extbit2:=’0’B
}

//The encoding will have the following result:
// 01000001
// 11011101
// ˆ the overwritten EXTENSION_BITs

//The encoding will result in the octetstring ’41DD’O

//Example number 3)
type record Rec4{
BIT11 field1,
BIT1 extbit
}

type record of Rec4 RecList
with { variant "EXTENSION_BIT(yes)"}
const RecList c_recs := {
{ field1:=’10010011011’B, extbit:=’1’B},
{ field1:=’11010111010’B, extbit:=’0’B}
}

//The encoding will have the following result:
// 10011011
// 10100100
// 11101011
// ˆ the overwritten EXTENSION_BITs

//The encoding will result in the octetstring ’9BA4EB’O
----
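The effect on the MSBs can be sketched as follows (Python, illustration only; the real encoder works on the already-encoded bitstream):

```python
def apply_extension_bit(data: bytes, mode: str) -> bytes:
    """Overwrite the MSB of every octet according to EXTENSION_BIT."""
    out = bytearray(data)
    for i in range(len(out)):
        last = i == len(out) - 1
        if mode == "yes":        # MSB 0: continued, MSB 1: last octet
            out[i] = out[i] | 0x80 if last else out[i] & 0x7F
        elif mode == "reverse":  # MSB 1: continued, MSB 0: last octet
            out[i] = out[i] & 0x7F if last else out[i] | 0x80
    return bytes(out)

print(apply_extension_bit(bytes.fromhex("586211"), "reverse").hex())
# d8e211, as in example 1)
```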

*EXTENSION_BIT_GROUP*

Attribute syntax: `EXTENSION_BIT_GROUP(<param1, param2, param3>)`

Parameters allowed: `param1: no, yes, reverse` +
                    `param2: first_field_name`, +
                    `param3: last_field_name`

Default value: none

Can be used with: a group of `record` fields

Description: The `EXTENSION_BIT_GROUP` limits the extension bit mechanism to a group of the fields of a `record` instead of the whole `record`. +
`first_field_name`: the name of the first field in the group +
`last_field_name`: the name of the last field in the group

NOTE: Multiple group definition is allowed to define more groups within one `record`. Every group must be octet aligned and the groups must not overlap.

Example:
[source]
----
type record MyPDU{
OCT1 header,
BIT7 octet2info,
BIT1 extbit1,
BIT7 octet2ainfo optional,
BIT1 extbit2 optional,
OCT1 octet3,
BIT7 octet4info,
BIT1 extbit3,
BIT7 octet4ainfo optional,
BIT1 extbit4 optional
} with {
variant "EXTENSION_BIT_GROUP(yes,octet2info,extbit2)";
variant "EXTENSION_BIT_GROUP(yes,octet4info,extbit4)"
}

const MyPDU c_pdu:={
header:=’0F’O,
octet2info:=’1011011’B,
extbit1:= ’0’B,
octet2ainfo:= omit,
extbit2:= omit,
octet3:=’00’O,
octet4info:=’0110001’B,
extbit3:=’1’B,
octet4ainfo:=’0011100’B,
extbit4:=’0’B
}

//The encoding will have the following result:
// 00001111
// 11011011
// 00000000
// 00110001
// 10011100
// ^ the overwritten extension bits (MSB of each group octet)
//The encoding will result in the octetstring: ’0FDB00319C’O
----

==== Attributes Controlling Padding

This section defines the attributes that describe the padding of fields.

*ALIGN*

Attribute syntax: `ALIGN(<parameter>)`

Parameters allowed: `left`, `right`

Default value: `left` for `octetstrings`, `right` for all other types

Can be used with: stand-alone types or the field of a `record` or `set`.

Description: This attribute has meaning when the length of the actual value can be determined and is less than the specified `FIELDLENGTH`. In such cases the remaining bits/bytes will be padded with zeros. The attribute `ALIGN` specifies the sequence of the actual value and the padding within the encoded bitfield. +
`right`: The LSB of the actual value is aligned to the LSB of coded bitfield +
`left`: The MSB of the actual value is aligned to the MSB of coded bitfield

NOTE: It has no meaning during decoding except if the length of the actual value can be determined from the length restriction of the type. In this case the `ALIGN` also specifies the order of the actual value and the padding within the encoded bitfield.

Examples:
[source]
----
//Example number 1)
type octetstring OCT10
with {
variant "ALIGN(left)";
variant "FIELDLENGTH(10)"
}

const OCT10 c_oct := ’0102030405’O
//Encoded value: ’01020304050000000000’O
//The decoded value: ’01020304050000000000’O
//Example number 2)
type octetstring OCT10length5 length(5)
with {
variant "ALIGN(left)";
variant "FIELDLENGTH(10)"
}

const OCT10length5 c_oct5 := ’0102030405’O
//Encoded value: ’01020304050000000000’O
//The decoded value: ’0102030405’O
----

*PADDING*

Attribute syntax: `PADDING(<parameter>)`

Parameters allowed:

* `no`
* `yes`
* `octet`
* `nibble`
* `word16`
* `dword32`
* integer to specify the padding unit and allow padding.

Default value: none

Can be used with: This attribute can belong to any types.

Description: This attribute specifies that an encoded type shall *end* at a boundary fixed by a multiple of `padding` unit bits counted from the beginning of the message. The default padding unit is 8 bits. If `PADDING` is set to `yes`, the unused bits of the last octet of the encoded type will be filled with the padding pattern. If `PADDING` is set to `no`, the next field will use the remaining bits of the last octet. If a padding unit is specified, the unused bits between the end of the field and the next padding position will be filled with the padding pattern.

NOTE: It is possible to use different padding for every field of structured types. The padding unit defined by `PADDING` and `PREPADDING` attributes can be different for the same type.

Examples:
[source]
----
//Example number 1)
type BIT5 Bit5padded with { variant "PADDING(yes)"}

const Bit5padded c_bits:=’10011’B

//The encoding will have the following result:
// 00010011
// ˆ the padding bits
//The encoding will result in the octetstring ’13’O

//Example number 2)
type record Paddedrec{
BIT3 field1,
BIT7 field2
} with { variant "PADDING(yes)"}

const Paddedrec c_myrec:={
field1:=’101’B,
field2:=’0110100’B
}

//The encoding will have the following result:
// 10100101
// 00000001
// ˆ the padding bits

//The encoding will result in the octetstring ’A501’O

//Example number 3): padding to 32 bits
type BIT5 Bit5padded_dw with { variant "PADDING(dword32)"}
const Bit5padded_dw c_dword:=’10011’B
//The encoding will have the following result:
// 00010011
// 00000000
// 00000000
// 00000000
// ˆ the padding bits

//The encoding will result in the octetstring ’13000000’O

//Example number 4)
type record Paddedrec_dw{
BIT3 field1,
BIT7 field2
} with { variant "PADDING(dword32)"}
const Paddedrec_dw c_dwords:={
field1:=’101’B,
field2:=’0110100’B
}

//The encoding will have the following result:
// 10100101
// 00000001
// 00000000
// 00000000
// ˆ the padding bits
//The encoding will result in the octetstring ’A5010000’O
----
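The number of inserted padding bits depends only on the current bit position and the padding unit. The calculation can be sketched as follows (Python, illustration only, not TITAN code):

```python
def padding_bits(bit_position: int, unit_bits: int) -> int:
    """Bits needed to reach the next multiple of unit_bits."""
    return (unit_bits - bit_position % unit_bits) % unit_bits

print(padding_bits(5, 8))   # 3  -> a BIT5 field padded to an octet boundary
print(padding_bits(5, 32))  # 27 -> the same field with PADDING(dword32)
```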

*PADDING_PATTERN*

Attribute syntax: `PADDING_PATTERN(<parameter>)`

Parameters allowed: bitstring

Default value: `’0’B`

Can be used with: any type with attributes `PADDING` or `PREPADDING`.

Description: This attribute specifies the padding pattern used by the padding mechanism. The default padding pattern is ’0’B. If the specified padding pattern is shorter than the padding space, the padding pattern is repeated.

Comment: For a particular field or type only one padding pattern can be specified for `PADDING` and `PREPADDING`.

Examples:
[source]
----
//Example number 1)
type BIT8 Bit8padded with {
variant "PREPADDING(yes), PADDING_PATTERN(’1’B)"
}

type record PDU {
BIT3 field1,
Bit8padded field2
} with {variant ""}

const PDU c_myPDU:={
field1:=’101’B,
field2:=’10010011’B
}

//The encoding will have the following result:
// 11111101
// 10010011
// (the first five bits of the first octet are the padding)
//The encoding will result in the octetstring ’FD93’O
//Example number 2): padding to 32 bits

type BIT8 Bit8pdd with {
variant "PREPADDING(dword32), PADDING_PATTERN(’10’B)"
}

type record PDU{
BIT3 field1,
Bit8pdd field2
} with {variant ""}
const PDU c_myPDUplus:={
field1:=’101’B,
field2:=’10010011’B
}

//The encoding will have the following result:
// 01010101
// 01010101
// 01010101
// 01010101
// 10010011
// (the padding bits fill the stream up to the 32-bit boundary)

//The encoding will result in the octetstring ’5555555593’O
----

*PADDALL*

Attribute syntax: `PADDALL` (no parameter)

Can be used with: `record` or `set`.

Description: If `PADDALL` is specified, the padding parameter specified for a whole `record` or `set` will be valid for every field of the structured type in question.

NOTE: If a different padding parameter is specified for any field, it will not be overridden by the padding parameter specified for the `record`.

Examples:
[source]
----
//Example number 1)
type record Paddedrec{
BIT3 field1,
BIT7 field2
} with { variant "PADDING(yes)"}
const Paddedrec c_myrec:={
field1:=’101’B,
field2:=’0110100’B
}

//The encoding will have the following result:
// 10100101
// 00000001
// ˆ the padding bits
//The encoding will result in the octetstring ’A501’O

//Example number 2)

type record Padddd{
BIT3 field1,
BIT7 field2
} with { variant "PADDING(yes), PADDALL"}

const Padddd c_myrec:={
field1:=’101’B,
field2:=’0110100’B
}

//The encoding will have the following result:
// 00000101
// 00110100
// ˆ the padding bits

//The encoding will result in the octetstring ’0534’O

//Example number 3)

type record Padded{
BIT3 field1,
BIT5 field2,
BIT7 field3
} with { variant "PADDING(yes), PADDALL"}

const Padded c_ourrec:={
field1:=’101’B,
field2:=’10011’B,
field3:=’0110100’B
}

//The encoding will have the following result:
// 00000101
// 00010011
// 00110100
// ˆ the padding bits

//The encoding will result in the octetstring ’051334’O

//Example number 4): field1 shouldn’t be padded

type record Paddd{
BIT3 field1,
BIT5 field2,
BIT7 field3
} with { variant "PADDING(yes), PADDALL";
variant (field1) "PADDING(no)" }
const Paddd c_myrec:={
field1:=’101’B,
field2:=’10011’B,
field3:=’0110100’B
}

//The encoding will have the following result:
// 10011101 < field1 is not padded!!!
// 00110100
// ˆ the padding bit
//The encoding will result in the octetstring ’9D34’O
----

*PREPADDING*

Attribute syntax: `PREPADDING(<parameter>)`

Parameters allowed:

* `no`
* `yes`
* `octet`
* `nibble`
* `word16`
* `dword32`
* integer to specify the padding unit and allow padding.

Default value: none

Can be used with: any type.

Description: This attribute specifies that an encoded type shall *start* at a boundary fixed by a multiple of padding unit bits counted from the beginning of the message. The default padding unit is 8 bits. If `PREPADDING` is set to `yes`, the unused bits of the last octet of the previously encoded type will be filled with the padding pattern and the actual field starts at an octet boundary. If `PREPADDING` is set to `no`, the remaining bits of the last octet will be used by the field. If a padding unit is specified, the unused bits between the end of the last field and the next padding position will be filled with the padding pattern and the actual field starts from that point.

NOTE: It is possible to use different padding for every field of structured types. The padding unit defined by `PADDING` and `PREPADDING` attributes can be different for the same type.

Examples:
[source]
----
//Example number 1)

type BIT8 bit8padded with { variant "PREPADDING(yes)"}
type record PDU{
BIT3 field1,
bit8padded field2
} with {variant ""}
const PDU c_myPDU:={
field1:=’101’B,
field2:=’10010011’B
}

//The encoding will have the following result:
// 00000101
// 10010011
// (the first five bits of the first octet are the padding)
//The encoding will result in the octetstring ’0593’O
//Example number 2): padding to 32 bits

type BIT8 bit8padded_dw with { variant "PREPADDING(dword32)"}
type record PDU{
BIT3 field1,
bit8padded_dw field2
} with {variant ""}
const PDU myPDU:={
field1:=’101’B,
field2:=’10010011’B
}

//The encoding will have the following result:
// 00000101
// 00000000
// 00000000
// 00000000
// 10010011

// (the padding bits fill the stream up to the 32-bit boundary)

//The encoding will result in the octetstring ’0500000093’O
----

==== Attributes of Length and Pointer Field

This section describes the coding attributes of fields containing length information or serving as pointer within a `record`.

The length and pointer fields must be of TTCN–3 `integer` type and must have fixed length.

The attributes described in this section are applicable to fields of a `record`.

*LENGTHTO*

Attribute syntax: `LENGTHTO(<parameter>) [ (`+' | `-') <offset> ]`

Parameters allowed: list of TTCN–3 field identifiers

Parameter value: any field name

Offset value: positive integer

Default value: none

Can be used with: fields of a `record`.

Description: The encoder is able to calculate the encoded length of one or several fields and put the result in another field of the same record. Consider a record with the fields `lengthField`, `field1`, `field2` and `field3`. Here `lengthField` may contain the encoded length of either one field (for example, `field2`) or the sum of the lengths of multiple fields (for example, `field2` + `field3`). The parameter is the field identifier (or list of field identifiers) of the `record` to be encoded.

If the offset is present, it is added to or subtracted from (the operation specified in the attribute is performed) the calculated length during encoding. During decoding, the offset is subtracted from or added to (the opposite operation to the one specified in the attribute is performed) the decoded value of the length field.

NOTE: The length is expressed in units defined by the attribute `UNIT`. The default unit is octet. The length field should be a TTCN–3 `integer` or `union` type. A special `union` containing only `integer` fields can be used as a variable length field; it must not be used with `LENGTHINDEX`. The length field can be included in the sum of the lengths of multiple fields (e.g. `lengthField` + `field2` + `field3`). The `union` field is NOT selected by the encoder, so the suitable field must be selected before encoding! The fields included in the length computation need not be contiguous.

Examples:
[source]
----
//Example number 1)
type record Rec {
INT1 len,
OCT3 field1,
octetstring field2
}

with {
variant (len) "LENGTHTO(field1)";
variant (len) "UNIT(bits)"
}

//Example number 2)

type record Rec2 {
INT1 len,
OCT3 field1,
octetstring field2
}

with {
variant (len) "LENGTHTO(len, field1, field2)"
}

//Example number 3)

type record Rec3 {
INT1 len,
OCT3 field1,
OCT1 field2,
octetstring field3
}

with {
variant (len) "LENGTHTO(field1, field3)"
// field2 is excluded!
}

//Example number 4): using union as length field
type union length_union{
integer short_length_field,
integer long_length_field
} with {
variant (short_length_field) "FIELDLENGTH(7)";
variant (long_length_field) "FIELDLENGTH(15)";
}

type record Rec4{
BIT1 flag,
length_union length_field,
octetstring data
} with {
variant (length_field)
"CROSSTAG(short_length_field, flag = ’0’B;
long_length_field, flag = ’1’B)";
variant (length_field) "LENGTHTO(data)"
}

//Template for short data (data is shorter than 127 octets).
//A const cannot be parameterized, so a parameterized template is used:

template Rec4 t_short_data(octetstring oc) := {
flag :=’0’B,
length_field:={short_length_field:=0},
data := oc
}

//Template for long data (data is longer than 126 octets):

template Rec4 t_long_data(octetstring oc) := {
flag :=’1’B,
length_field:={long_length_field:=0},
data := oc
}

//Example number 5): with offset
type record Rec5 {
integer len,
octetstring field
}

with {
variant (len) "LENGTHTO(field) + 1"
}

// { len := 0, field := '12345678'O } would be encoded into '0512345678'O
// (1 is added to the length of `field')
// and '0512345678'O would be decoded into { len := 4, field := '12345678'O }
// (1 is subtracted from the decoded value of `len')

//Example number 6): with offset

type record Rec6 {
integer len,
octetstring field
}

with {
variant (len) "LENGTHTO(field) - 2"
}

// { len := 0, field := '12345678'O } would be encoded into '0212345678'O
// (2 is subtracted from the length of `field')
// and '0212345678'O would be decoded into { len := 4, field := '12345678'O }
// (2 is added to the decoded value of `len')
----
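The offset handling of examples 5) and 6) amounts to a fixed correction applied in opposite directions by the encoder and the decoder (Python sketch, illustration only, not TITAN code):

```python
def encode_length(field: bytes, offset: int) -> int:
    """Encoder: calculated length with the offset applied."""
    return len(field) + offset

def decode_length(coded_value: int, offset: int) -> int:
    """Decoder: the opposite operation restores the real length."""
    return coded_value - offset

field = bytes.fromhex("12345678")
print(encode_length(field, +1))  # 5, as in example 5)
print(encode_length(field, -2))  # 2, as in example 6)
print(decode_length(5, +1))      # 4
print(decode_length(2, -2))      # 4
```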

*LENGTHINDEX*

Attribute syntax: `LENGTHINDEX(<parameter>)`

Parameters allowed: TTCN–3 field identifier

Allowed values: any nested field name

Default value: none

Can be used with: fields of a `record`.

Description: This attribute extends the `LENGTHTO` attribute with the identification of the nested field containing the length value within the field of the corresponding `LENGTHTO` attribute.

Comment: See also the description of the `LENGTHTO` attribute.

NOTE: The field named by the `LENGTHINDEX` attribute should be a TTCN–3 `integer` type.

Example (see also example of `LENGTHTO` attribute).
[source]
----
type integer INT1
with {
variant "FIELDLENGTH(8)"
}

type record InnerRec {
INT1 length
}

with { variant "" }
type record OuterRec {
InnerRec lengthRec,
octetstring field
}

with {
variant (lengthRec) "LENGTHTO(field)";
variant (lengthRec) "LENGTHINDEX(length)"
}
----

*POINTERTO*

Attribute syntax: `POINTERTO(<parameter>)`

Parameters allowed: TTCN–3 field identifier

Default value: none

Can be used with: fields of a `record`.

Description: Some record fields contain the distance to another encoded field. Records can be encoded in the form: `ptr1`, `ptr2`, `ptr3`, `field1`, `field2`, `field3`, where the position of `fieldN` within the encoded stream can be determined from the value and position of field `ptrN`. The distance of the pointed field from the base field will be `ptrN * UNIT + PTROFFSET`. The default base field is the pointer itself. The base field can be set by the `PTROFFSET` attribute. When the pointed field is optional, the pointer value 0 indicates the absence of the pointed field.

Comment: See also the description of the `UNIT` and `PTROFFSET` attributes.

NOTE: Pointer fields should be of TTCN–3 `integer` type.

Examples:
[source]
----
type record Rec {
  INT1 ptr1,
  INT1 ptr2,
  INT1 ptr3,
  OCT3 field1,
  OCT3 field2,
  OCT3 field3
}

with {
  variant (ptr1) "POINTERTO(field1)";
  variant (ptr2) "POINTERTO(field2)";
  variant (ptr3) "POINTERTO(field3)"
}

const Rec c_rec := {
  ptr1 := <any value>,
  ptr2 := <any value>,
  ptr3 := <any value>,
  field1 := '010203'O,
  field2 := '040506'O,
  field3 := '070809'O
}

//Encoded c_rec: '030507010203040506070809'O
//The value of ptr1: 03
//PTROFFSET and UNIT are not set, so the default (0) is used.
//The starting position of ptr1: 0th bit
//The starting position of field1 = 3 * 8 + 0 = 24th bit.
----
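The offset arithmetic of the example can be checked with a small Python sketch (illustrative only): with the default `UNIT` (octets, 8 bits) and `PTROFFSET` (0), a pointed field starts `ptrN * 8` bits after the start of the pointer itself:

```python
def pointed_start_bit(ptr_value: int, ptr_start_bit: int,
                      unit_bits: int = 8, ptr_offset: int = 0) -> int:
    # distance from the base field = ptrN * UNIT + PTROFFSET;
    # the default base field is the pointer itself
    return ptr_start_bit + ptr_value * unit_bits + ptr_offset

# '030507...'O: ptr1 = 3 at bit 0, ptr2 = 5 at bit 8, ptr3 = 7 at bit 16
assert pointed_start_bit(3, 0) == 24    # field1
assert pointed_start_bit(5, 8) == 48    # field2
assert pointed_start_bit(7, 16) == 72   # field3
```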

*PTROFFSET*

Attribute syntax: `PTROFFSET(<parameter>)`

Parameters allowed: `integer`, TTCN–3 field identifier

Default value: 0

Can be used with: fields of a `record`.

Description: This attribute specifies where the pointed field area starts, that is, the base field of the pointer calculation. The distance of the pointed field from the base field equals `ptr_field * UNIT + PTROFFSET`.

Comment: Both a base field and a pointer offset can be specified for the same field. See also the description of the attributes `POINTERTO` and `UNIT`.

Examples:
[source]
----
type record Rec {
  INT2 ptr1,
  INT2 ptr2,
  OCT3 field1,
  OCT3 field2
}

with {
  variant (ptr1) "POINTERTO(field1)";
  variant (ptr1) "PTROFFSET(ptr2)";
  variant (ptr2) "POINTERTO(field2)";
  variant (ptr2) "PTROFFSET(field1)"
}

//In the example above the distance will not include
//the pointer itself.
----

*UNIT*

Attribute syntax: `UNIT(<parameter>)`

Parameters allowed:

* bits
* octets
* nibble
* word16
* dword32
* elements
* integer

Default value: octets

Can be used with: fields of a `record`.

Description: The `UNIT` attribute is used in conjunction with the `LENGTHTO` or `POINTERTO` attributes. Length indicator fields contain the length expressed in the indicated unit (for example, in 32-bit words when `UNIT(dword32)` is set).

Comment: See also the description of the `LENGTHTO` and `POINTERTO` attributes. `elements` can be used with `LENGTHTO` only, and only if the length field counts the number of elements of a `record of` or `set of` field.

Examples:
[source]
----
//Example number 1): measuring length in 32 bit long units
type record Rec {
  INT1 length,
  octetstring field
}

with {
  variant (length) "LENGTHTO(field)";
  variant (length) "UNIT(dword32)"
}

//Example number 2): measuring length in 2 bit long units
type record Rec {
  INT1 length,
  octetstring field
}

with {
  variant (length) "LENGTHTO(field)";
  variant (length) "UNIT(2)"
}

//Example number 3): counting the number of elements of a record of field
type record of BIT8 Bitrec

type record Rec {
  integer length,
  Bitrec data
}

with {
  variant (length) "LENGTHTO(data)";
  variant (length) "UNIT(elements)"
}
----
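The value of the length indicator therefore depends on the chosen unit. A minimal Python sketch of the conversion (illustrative only, assuming the field length is an exact multiple of the unit):

```python
UNIT_BITS = {"bits": 1, "nibble": 4, "octets": 8, "word16": 16, "dword32": 32}

def length_indicator(field: bytes, unit: str) -> int:
    # length of `field` measured in the given unit
    return len(field) * 8 // UNIT_BITS[unit]

assert length_indicator(bytes(8), "dword32") == 2  # 8 octets = 2 32-bit words
assert length_indicator(bytes(3), "octets") == 3
```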

==== Attributes to Identify Fields in Structured Data Types

This section describes the coding attributes that identify, during decoding, fields within structured data types such as `record`, `set` or `union`.

*PRESENCE*

Attribute syntax: `PRESENCE(<parameter>)`

Parameters allowed: a `presence_indicator` expression (see Description)

Default value: none

Can be used with: `optional` fields of a `record` or `set`.

Description: Within a `record` some fields may indicate the presence of another, optional field. The attribute `PRESENCE` describes these cases. Each optional field can have a `PRESENCE` definition, which is a `presence_indicator` expression. `Presence_indicators` have the form `<key> = <constant>` or `{<key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN>}`, where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `'25'O` or `'1001101'B`).

NOTE: The PRESENCE attribute can identify the presence of the whole record. In that case the field reference must be omitted.

Examples:
[source]
----
type record Rec {
  BIT1 presence,
  OCT3 field optional
}

with {
  variant (field) "PRESENCE(presence = '1'B)"
}

type record R2 {
  OCT1 header,
  OCT1 data
} with { variant "PRESENCE(header = '11'O)" }
----

*TAG*

Attribute syntax: `TAG(<parameter>)`

Parameters allowed: list of `field_identifications` (see Description)

Default value: none

Can be used with: `record` or `set`.

Description: The purpose of the attribute `TAG` is to identify specific values in certain fields of `set` or `record` elements or `union` choices. When the `TAG` is specified for a `record` or a `set`, the presence of the given element can be identified at decoding. When the `TAG` belongs to a `union`, the union member can be identified at decoding. The attribute is a list of `field_identifications`. Each `field_identification` consists of a `record`, `set` or `union` field name and a `presence_indicator` expression separated by a comma (,). `Presence_indicators` have the form `<key> = <constant>` or `{ <key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN> }`, where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `'25'O` or `'1001101'B`). There is a special `presence_indicator`: `OTHERWISE`, which indicates the default union member when the `TAG` belongs to a union.

NOTE: `TAG` works on non-optional fields of a record as well. It is recommended to use the attributes `CROSSTAG` or `PRESENCE` instead, leading to more effective decoding.

Examples:
[source]
----
//Example number 1): set
type record InnerRec {
  INT1 tag,
  OCT3 field
} with { variant "" }

type set SomeSet {
  InnerRec field1 optional,
  InnerRec field2 optional,
  InnerRec field3 optional
}

with {
  variant "TAG(field1, tag = 1;
               field2, tag = 2;
               field3, tag = 3)"
}

//Example number 2): union
type union SomeUnion {
  InnerRec field1,
  InnerRec field2,
  InnerRec field3
}

with {
  variant "TAG(field1, tag = 1;
               field2, tag = 2;
               field3, OTHERWISE)"
}

//If neither tag = 1 in field1 nor tag = 2 in field2 matches,
//field3 is selected.

//Example number 3): record
type record MyRec {
  OCT1 header,
  InnerRec field1 optional
}

with {
  variant (field1) "TAG(field1, tag = 1)"
}

//field1 is present when tag in field1 equals 1.
----
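The union selection of example 2 amounts to a lookup with a fallback. A hedged Python sketch of that decision (field names taken from the example; not TITAN's implementation):

```python
def select_union_member(tag: int) -> str:
    # field3 carries OTHERWISE: it is chosen when no TAG value matches
    return {1: "field1", 2: "field2"}.get(tag, "field3")

assert select_union_member(1) == "field1"
assert select_union_member(99) == "field3"
```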

*CROSSTAG*

Attribute syntax: `CROSSTAG(<parameter>)`

Parameters allowed: list of union "field_identifications" (see Description)

Default value: none

Can be used with: `union` fields of `records`.

Description: When one field of a `record` specifies which union member is present in another field of the same `record`, a `CROSSTAG` definition is used. Each union field can have a `CROSSTAG` definition, which is a list of union `field_identifications`. Each `field_identification` consists of a union field name and a `presence_indicator` expression separated by a comma (,). `Presence_indicators` have the form `<key> = <constant>` or `{<key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN>}`, where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `'25'O` or `'1001101'B`). There is a special `presence_indicator`: `OTHERWISE`, which indicates the default union member.

NOTE: The difference between the `TAG` and `CROSSTAG` concept is that in case of `TAG` the field identifier is inside the field to be identified. In case of `CROSSTAG` the field identifier can either precede or succeed the union field it refers to. If the field identifier succeeds the union, they must be in the same record, the union field must be mandatory and all of its embedded types must have the same fixed size.

Examples:
[source]
----
type union AnyPdu {
  PduType1 type1,
  PduType2 type2,
  PduType3 type3
} with { variant "" }

type record PduWithId {
  INT1 protocolId,
  AnyPdu pdu
}

with {
  variant (pdu) "CROSSTAG( type1, { protocolId = 1,
                                    protocolId = 11 };
                           type2, protocolId = 2;
                           type3, protocolId = 3)"
}
----
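Decoding per this example reduces to a mapping from `protocolId` to the union member. An illustrative Python sketch (names from the example above, not TITAN's codec):

```python
# protocolId values 1 and 11 both select type1, per the CROSSTAG list
CROSSTAG_MAP = {1: "type1", 11: "type1", 2: "type2", 3: "type3"}

def select_pdu_member(protocol_id: int) -> str:
    return CROSSTAG_MAP[protocol_id]

assert select_pdu_member(11) == "type1"
assert select_pdu_member(3) == "type3"
```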

*REPEATABLE*

Attribute syntax: `REPEATABLE(<parameter>)`

Parameters allowed: `yes`, `no`

Default value: none

Can be used with: `record of`/`set of` type fields of a `set`.

Description: The elements of a `set` can appear in any order. The `REPEATABLE` attribute controls whether the elements of a `record of` or `set of` field can be mixed with the other elements of the `set` or must be grouped together.

NOTE: It has no effect during encoding.

Examples:
[source]
----
// Three records and a set are defined as follows:

type record R1 {
  OCT1 header,
  OCT1 data
} with { variant "PRESENCE(header = 'AA'O)" }

type record of R1 R1list;

type record R2 {
  OCT1 header,
  OCT1 data
} with { variant "PRESENCE(header = '11'O)" }

type record R3 {
  OCT1 header,
  OCT1 data
} with { variant "PRESENCE(header = '22'O)" }

type set S1 {
  R2 field1,
  R3 field2,
  R1list field3
}

with { variant (field3) "REPEATABLE(yes)" }

//The following encoded values have equal meaning
//(the elements of field3 are shown in brackets):
//example1: 1145 [AA01 AA02 AA03] 2267
//example2: 1145 [AA01] 2267 [AA02 AA03]
//example3: [AA01] 2267 [AA02] 1145 [AA03]

//The decoded value of type S1:
{
  field1 := {
    header := '11'O,
    data := '45'O
  },
  field2 := {
    header := '22'O,
    data := '67'O
  },
  field3 := {
    { header := 'AA'O, data := '01'O },
    { header := 'AA'O, data := '02'O },
    { header := 'AA'O, data := '03'O }
  }
}

type set S2 {
  R2 field1,
  R3 field2,
  R1list field3
}

with { variant (field3) "REPEATABLE(no)" }

//Only example1 is a valid encoded value for S2, because
//the elements of field3 must be grouped together.
----
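A rough Python model of decoding S1 (illustrative only; elements are recognized by their `PRESENCE` header octet, each element being a header octet plus one data octet) shows why the grouped and interleaved encodings yield the same value when `REPEATABLE(yes)` is set:

```python
def decode_s1(stream: bytes) -> dict:
    # R2 has header '11'O, R3 '22'O, R1 (the list elements) 'AA'O
    result = {"field1": None, "field2": None, "field3": []}
    for i in range(0, len(stream), 2):
        header, data = stream[i], stream[i + 1]
        if header == 0x11:
            result["field1"] = data
        elif header == 0x22:
            result["field2"] = data
        elif header == 0xAA:
            result["field3"].append(data)
    return result

# grouped (example1) and interleaved (example3) encodings decode identically
assert decode_s1(bytes.fromhex("1145AA01AA02AA032267")) == \
       decode_s1(bytes.fromhex("AA012267AA021145AA03"))
```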

*FORCEOMIT*

Attribute syntax: `FORCEOMIT(<parameter>)`

Parameters allowed: list of TTCN-3 field identifiers (can also be n