From 32710ec42908c1ef5064203ccde07c6d1d7125fc Mon Sep 17 00:00:00 2001
From: balaskoa <Jeno.Balasko@ericsson.com>
Date: Thu, 26 Mar 2020 09:06:01 +0100
Subject: [PATCH] Editorial change in adocs: Curly opening/closing quotes have
 been replaced by straight quotes

Signed-off-by: balaskoa <Jeno.Balasko@ericsson.com>
Change-Id: Ieca873b49ffd110d4ef7d1fadced70915025ec6f
---
 ...clarifications_to_the_ttcn-3_standard.adoc |    8 +-
 .../4-ttcn3_language_extensions.adoc          | 2021 +++++++++--------
 2 files changed, 1017 insertions(+), 1012 deletions(-)

diff --git a/usrguide/referenceguide/3-clarifications_to_the_ttcn-3_standard.adoc b/usrguide/referenceguide/3-clarifications_to_the_ttcn-3_standard.adoc
index 33f52a169..f37f9f906 100644
--- a/usrguide/referenceguide/3-clarifications_to_the_ttcn-3_standard.adoc
+++ b/usrguide/referenceguide/3-clarifications_to_the_ttcn-3_standard.adoc
@@ -68,7 +68,7 @@ The standard does not specify clearly some of the encoding rules.
 
 * The encoding of fields in `record`, `set` and `union` types is supported.
 * The order of attributes of the same type in a `with` statement is important. The second variant might override the first, or an overriding attribute will override all the following attributes of the same type.
-* Encode attributes are an exception to this as they are not really attributes, but "contexts". It cannot be determined to which encode "contexts" the variants of the same `with` statement should belong if there are several. As having several encode "contexts" in the same `with` statement would be a bad coding practice, a warning is generated and the last encode is used as the statement’s encode "contexts".
+* Encode attributes are an exception to this as they are not really attributes, but "contexts". It cannot be determined to which encode "context" the variants of the same `with` statement should belong if there are several. As having several encode "contexts" in the same `with` statement would be a bad coding practice, a warning is generated and the last encode is used as the statement's encode "context".
 * As encodes are contexts, an encode is only overridden if the overriding context is not the same.
 * The order of attributes of different type in a `with` statement is not important, they do not affect each other.
 * In case of structured types, the encode context of the type is the encode context of its fields too, if the fields do not override this attribute. The other attribute types are handled separately for the structured type and its fields. Attributes inherited from higher level (module/group/structured type) might change the encoding of a record and that of its fields.
@@ -170,9 +170,9 @@ This isn't the case in the TITAN runtime. Values only have 2 states: _bound_ and
 * `record` / `set`: unbound = uninitialized, bound = at least partially initialized, meaning that a `record` / `set` is bound if at least one of its fields is boundfootnote:[The bound state of fields or elements is also determined by using the isbound operation on the field or element.];
 * `record of` / `set of`: unbound = uninitialized, bound = at least partially initialized, meaning that the record of is only unbound if it has never received an initial value (even initializing with {} creates a bound `record of` / `set of` value);
 * `array`: unbound = uninitialized or partially initialized, bound = fully initialized, meaning that the array is only bound if all of its elements are bound;
-* `unions` can't be partially initialized, so TITAN stores their bound state correctly (although it’s still possible to create `union` values, where the selected alternative is unbound, with the legacy command line option `–B`; these values would be considered bound by TITAN).
+* `unions` can't be partially initialized, so TITAN stores their bound state correctly (although it's still possible to create `union` values, where the selected alternative is unbound, with the legacy command line option `-B`; these values would be considered bound by TITAN).
 
-There is a workaround in TITAN’s implementation of `records` / `sets` to allow the copying of partially initialized values (`union` values with unbound selected alternatives can also be copied when the compiler option `–B` is set). In all other cases the user is responsible for making sure the value is usable on the right hand side of an operation. The `isbound` function is usually not enough to ensure, that the value is usable.
+There is a workaround in TITAN's implementation of `records` / `sets` to allow the copying of partially initialized values (`union` values with unbound selected alternatives can also be copied when the compiler option `-B` is set). In all other cases the user is responsible for making sure the value is usable on the right hand side of an operation. The `isbound` function is usually not enough to ensure that the value is usable.
 
 == Concatenation of templates
 
@@ -206,5 +206,5 @@ var IntList vl_myList := { 1, 2, 3 };
 var IntList vl_emptyList := {};
 replace(vl_myList, 1, 2, vl_emptyList); // returns { 1 }
 replace("abcdef", 2, 1, ""); // returns "abdef"
-replace(‘12FFF’H, 3, 2, ‘’H); // returns ‘12F’H
+replace('12FFF'H, 3, 2, ''H); // returns '12F'H
 ----
diff --git a/usrguide/referenceguide/4-ttcn3_language_extensions.adoc b/usrguide/referenceguide/4-ttcn3_language_extensions.adoc
index 7c227ea5a..fed507e74 100644
--- a/usrguide/referenceguide/4-ttcn3_language_extensions.adoc
+++ b/usrguide/referenceguide/4-ttcn3_language_extensions.adoc
@@ -30,7 +30,7 @@ The following table summarizes all supported escape sequences of TTCN–3 charac
 | 11 |vertical tabulator |
 |\ |92 |backslash
 |&quot; |34 |quotation mark
-|’ |39 |apostrophe
+|' |39 |apostrophe
 |? |63 |question mark
 | <newline> |nothing |line continuation
 | |NNN |octal notation (NNN is the character code in at most 3 octal digits)
@@ -341,7 +341,7 @@ In case of timers the name of the timer, the default duration, the current state
 
 The compiler allows starting TTCN–3 functions having return type on PTCs. Those functions must have the appropriate `runs on` clause. If such a function terminates normally on the PTC, the returned value can be matched and retrieved in a `done` operation.
 
-According to the TTCN-3 standard, the value redirect in a `done` operation can only be used to store the local verdict on the PTC that executed the behavior function. In TITAN the value redirect can also be used to store the behavior function’s return value with the help of an optional template argument.
+According to the TTCN-3 standard, the value redirect in a `done` operation can only be used to store the local verdict on the PTC that executed the behavior function. In TITAN the value redirect can also be used to store the behavior function's return value with the help of an optional template argument.
 
 If this template argument is present, then the compiler treats it as a value returning done operation, otherwise it is treated as a verdict returning `done`.
 
@@ -376,7 +376,7 @@ function ptcBehavior() runs on MyCompType return MyReturnType
   return 123;
 }
 
-// value returning ‘done’
+// value returning 'done'
 testcase myTestCase() runs on AnotherCompType
 {
   var MyReturnType myVar;
@@ -386,7 +386,7 @@ testcase myTestCase() runs on AnotherCompType
   // myVar will contain 123
 }
 
-// verdict returning ‘done’
+// verdict returning 'done'
 testcase myTestCase2() runs on AnotherCompType
 {
   var verdicttype myVar;
@@ -416,7 +416,7 @@ external function MyExtFunction() return template octetstring;
 The compiler accepts template module parameters by inserting an optional "template" keyword into the standard modulepar syntax construct between the modulepar keyword and the type reference. The extended BNF rule:
 
 [source,subs="+quotes"]
-ModuleParDef ::= "modulepar" (ModulePar | (“{“MultiTypedModuleParList "}"))ModulePar ::= *["template"]* Type ModuleParList
+ModuleParDef ::= "modulepar" (ModulePar | ("{" MultiTypedModuleParList "}")) ModulePar ::= *["template"]* Type ModuleParList
 
 Example:
 
@@ -447,9 +447,9 @@ template R tr_2 := {1, *, (2, 3) }
 template R tr_3 := { 1, *, 10 } length(5)
 template R tr_4 := { 1, 2, 3, * } length(1..2)
 template S tr_5 := { f1 := (0..99), f2 := omit, f3 := ? }
-template S tr_6 := { f3 := *, f1 := 1, f2 := ’00’B ifpresent }
+template S tr_6 := { f3 := *, f1 := 1, f2 := '00'B ifpresent }
 template S tr_7 := ({ f1 := 1, f2 := omit, f3 := "ABC" },
-                  { f1 := 2, f3 := omit, f2 := ’1’B })
+                  { f1 := 2, f3 := omit, f2 := '1'B })
 template S tr_8 := ?
 
 //sizeof(tr_1) → 4
@@ -550,7 +550,7 @@ NOTE: This function is the reverse of the standardized `bit2str`.
 Example:
 
 [source]
-str2bit ("1011011100") = ’1011011100’B
+str2bit ("1011011100") = '1011011100'B
 
 === `str2hex`
 
@@ -562,7 +562,7 @@ Example:
 
 [source]
 ----
-str2hex ("1D7") = ’1D7’H
+str2hex ("1D7") = '1D7'H
 ----
 
 === float2str
@@ -598,7 +598,7 @@ Syntax:
 [source]
 log2str (…) return charstring
 
-This function can be parameterized in the same way as the `log` function, it returns a charstring value which contains the log string for all the provided parameters, but it does not contain the timestamp, severity and call stack information, thus the output does not depend on the runtime configuration file. The parameters are interpreted the same way as they are in the log function: their string values are identical to what the log statement writes to the log file. The extra information (timestamp, severity, call stack) not included in the output can be obtained by writing external functions which use the runtime’s Logger class to obtain the required data.
+This function can be parameterized in the same way as the `log` function; it returns a charstring value which contains the log string for all the provided parameters, but it does not contain the timestamp, severity and call stack information; thus the output does not depend on the runtime configuration file. The parameters are interpreted the same way as they are in the log function: their string values are identical to what the log statement writes to the log file. The extra information (timestamp, severity, call stack) not included in the output can be obtained by writing external functions which use the runtime's Logger class to obtain the required data.
 
 === `testcasename`
 
@@ -649,13 +649,13 @@ Syntax:
 [source]
 ttcn2string(in <TemplateInstance> ti) return charstring
 
-This predefined function returns its parameter’s value in a string which is in TTCN-3 syntax. The returned string has legal ttcn-3 with a few exceptions such as unbound values. Unbound values are returned as “-“, which can be used only as fields of assignment or value list notations, but not as top level assignments (e.g. `x:=- is illegal`). Differences between the output format of `ttcn2string()` and `log2str()`:
+This predefined function returns its parameter's value in a string which is in TTCN-3 syntax. The returned string is legal TTCN-3, with a few exceptions such as unbound values. Unbound values are returned as "-", which can be used only as fields of assignment or value list notations, but not as top level assignments (e.g. `x := -` is illegal). Differences between the output format of `ttcn2string()` and `log2str()`:
 
 [cols=",,",options="header",]
 |===
 |Value/template |`log2str()` |`ttcn2string()`
-|Unbound value |`"<unbound>"` |“-“
-|Uninitialized template |`"<uninitialized template>"` |“-“
+|Unbound value |`"<unbound>"` |"-"
+|Uninitialized template |`"<uninitialized template>"` |"-"
 |Enumerated value |`name (number)` |name
 |===
 
@@ -680,7 +680,7 @@ var template MyRecord my_rec
   log(my_rec)
   }
   @catch (err_str) {
-    log(“string2ttcn() failed: “, err_str)
+    log("string2ttcn() failed: ", err_str)
   }
 
 The log output will look like this:
@@ -743,7 +743,7 @@ Example:
 
 [source]
 ----
-json2cbor("{"a":1,"b":2}") == ‘A2616101616202’O
+json2cbor("{\"a\":1,\"b\":2}") == 'A2616101616202'O
 ----
 
 === `cbor2json`
@@ -759,7 +759,7 @@ The function `cbor2json(in octetstring os) return universal charstring` converts
 Example:
 [source]
 ----
-cbor2json(‘A2616101616202’O) == "{"a":1,"b":2}"
+cbor2json('A2616101616202'O) == "{\"a\":1,\"b\":2}"
 ----
 
 === `json2bson`
@@ -775,7 +775,7 @@ The function `json2bson(in universal charstring us) return octetstring` converts
 Example:
 [source]
 ----
-json2bson("{"a":1,"b":2}") == ‘13000000106100010000001062000200000000’O
+json2bson("{\"a\":1,\"b\":2}") == '13000000106100010000001062000200000000'O
 ----
 
 === `bson2json`
@@ -791,7 +791,7 @@ The function `bson2json(in octetstring os) return universal charstring` converts
 Example:
 [source]
 ----
-bson2json(‘13000000106100010000001062000200000000’O) == "{"a":1,"b":2}"
+bson2json('13000000106100010000001062000200000000'O) == "{\"a\":1,\"b\":2}"
 ----
 
 == Exclusive Boundaries in Range Subtypes
@@ -801,7 +801,7 @@ The boundary values used to specify range subtypes can be preceded by an exclama
 [[special-float-values-infinity-and-not-a-number]]
 == Special Float Values Infinity and not_a_number
 
-The keyword infinity (which is also used to specify value range and size limits) can be used to specify the special float values –infinity and +infinity, these are equivalent to MINUS-INFINITY and PLUS-INFINITY used in ASN.1. A new keyword not_a_number has been introduced which is equivalent to NOT-A-NUMBER used in ASN.1. The -infinity and +infinity and not_a_number special values can be used in arithmetic operations. If an arithmetic operation’s operand is not_a_number then the result of the operation will also be not_a_number. The special value not_a_number cannot be used in a float range subtype because it’s an unordered value, but can be added as a single value, for example subtype (0.0 .. infinity, not_a_number) contains all positive float values and the not_a_number value.
+The keyword infinity (which is also used to specify value range and size limits) can be used to specify the special float values -infinity and +infinity; these are equivalent to MINUS-INFINITY and PLUS-INFINITY used in ASN.1. A new keyword not_a_number has been introduced which is equivalent to NOT-A-NUMBER used in ASN.1. The -infinity, +infinity and not_a_number special values can be used in arithmetic operations. If an arithmetic operation's operand is not_a_number then the result of the operation will also be not_a_number. The special value not_a_number cannot be used in a float range subtype because it's an unordered value, but it can be added as a single value; for example, subtype (0.0 .. infinity, not_a_number) contains all positive float values and the not_a_number value.
 
 [[ttcn-3-preprocessing]]
 == TTCN–3 Preprocessing
@@ -1007,7 +1007,7 @@ The following rules apply to macros:
 * All macros except `%testcaseId` are evaluated during compilation and they can be used anywhere in the TTCN–3 module.
 * Macro `%testcaseId` is evaluated at runtime. It can be used only within functions and altsteps that are being run on test components (on the MTC or PTCs) and within testcases. It is not allowed to use macro `%testcaseId` in the module control part. If a function or altstep that contains macro `%testcaseId` is called directly from the control part the evaluation of the macro results in a dynamic test case error.
 * The result of macro `%testcaseId` is not a constant thus it cannot be used in the value of TTCN–3 constants. It is allowed only in those contexts where TTCN–3 variable references are permitted.
-* Macro `%definitionId` is always substituted with the name of the top-level module definition that it is used in. <<13-references.adoc#_15, [15]>> For instance, if the macro appears in a constant that is defined within a function then the macro will be substituted with the function’s name rather than the one of the constant. When used within the control part macro `%definitionId` is substituted with the word "`control`".
+* Macro `%definitionId` is always substituted with the name of the top-level module definition that it is used in. <<13-references.adoc#_15, [15]>> For instance, if the macro appears in a constant that is defined within a function then the macro will be substituted with the function's name rather than the one of the constant. When used within the control part macro `%definitionId` is substituted with the word "`control`".
 * Macro `%fileName` is substituted with the name of the source file in the same form as it was passed to the compiler. This can be a simple file name, a relative or an absolute path name.
 * The result of macro `%lineNumber` is always a string that contains the current line number as a decimal number. Numbering of lines starts from 1. All lines of the input file (including comments and empty lines) are counted. When it needs to be used in an integer expression a conversion is necessary: `str2int(%lineNumber)`. The above expression is evaluated during compilation without any runtime performance penalty.
 * Source line markers are considered when evaluating macros `%fileName` and `%lineNumber`. In preprocessed TTCN–3 modules the macros are substituted with the original file name and line number that the macro comes from provided that the preprocessor supports it.
@@ -1038,9 +1038,9 @@ const charstring c_MyInvalidConst := %testcaseId;
 // this is valid, of course
 var charstring v_MyLocalVar := %testcaseId;
 // the two log commands below give different output in the log file
-log("function:", %definitionId, " testcase: “, %testcaseId);
+log("function:", %definitionId, " testcase: ", %testcaseId);
 // printout: function: f_MyFunction testcase: tc_MyTestcase
-log("function:", c_MyLocalConst1, " testcase: “, v_MyLocalVar);
+log("function:", c_MyLocalConst1, " testcase: ", v_MyLocalVar);
 // printout: function: "f_MyFunction" testcase: "tc_MyTestcase"
 }
 }
@@ -1376,7 +1376,7 @@ Differences from the legacy method:
 * the parameters `encoding_info/decoding_info` and `dynamic_encoding` of predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar` are supported (the `dynamic_encoding` parameter can be used for choosing the codec to use for values of types with multiple encodings; the `encoding_info`/`decoding_info` parameters are currently ignored);
 * the `self.setencode` version of the `setencode` operation is supported (it can be used for choosing the codec to use for types with multiple encodings within the scope of the current component);
 * the `@local` modifier is supported for `encode` attributes;
-* a type’s the default codec (used by `decmatch` templates, the @decoded modifier, and the predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar` when no dynamic encoding parameter is given) is:
+* a type's default codec (used by `decmatch` templates, the @decoded modifier, and the predefined functions `encvalue`, `decvalue`, `encvalue_unichar` and `decvalue_unichar` when no dynamic encoding parameter is given) is:
 * its one defined codec, if it has exactly one codec defined; or
 * unspecified, if it has multiple codecs defined (the mentioned methods of encoding/decoding can only be used in this case, if a codec was selected for the type using `self.setencode`).
 
@@ -1450,7 +1450,7 @@ The TITAN runtime does not directly call these external functions, they simply i
 
 These external functions can be declared with any prototype, and with the regular stream type of either `octetstring` or `charstring` (even though `encvalue` and `decvalue` have `bitstring` streams).
 
-The ASN.1 type cannot have several external encoder or decoded functions of different (built-in or PER) encoding types, as this way the compiler won’t know which encoding to use. Multiple encoder or decoder functions of the same encoding type can be declared for one type.
+The ASN.1 type cannot have several external encoder or decoder functions of different (built-in or PER) encoding types, because the compiler won't know which encoding to use. Multiple encoder or decoder functions of the same encoding type can be declared for one type.
 
 NOTE: These requirements are only checked if there is at least one `encvalue`, `decvalue`, `decmatch` template or decoded parameter or value redirect in the compiled modules. They are also checked separately for encoding and decoding (meaning that, for example, multiple encoder functions do not cause an error if only `decvalues` are used in the modules and no `encvalues`). +
 The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the encvalue, decvalue, decmatch or @decoded is located).
@@ -1480,7 +1480,7 @@ The predefined functions `encvalue` and `decvalue` can be used to encode and dec
 
 These functions must have the `encode`/`decode` and `prototype` extension attributes, similarly to built-in encoder and decoder functions, except the name of the encoding (the string specified in the `encode` or `decode` extension attribute) must not be equal to any of the built-in encoding names (e.g. BER, TEXT, XER, etc.).
 
-The compiler generates calls to these functions whenever `encvalue` or `decvalue` is called, or whenever a matching operation is performed on a `decmatch` template, or whenever a redirected value or parameter is decoded (with the `@decoded` modifier), if the value’s type has the same encoding as the encoder or decoder function (the string specified in the type’s `encode` attribute is equivalent to the string in the external function’s `encode` or `decode` extension attribute).
+The compiler generates calls to these functions whenever `encvalue` or `decvalue` is called, or whenever a matching operation is performed on a `decmatch` template, or whenever a redirected value or parameter is decoded (with the `@decoded` modifier), if the value's type has the same encoding as the encoder or decoder function (the string specified in the type's `encode` attribute is equivalent to the string in the external function's `encode` or `decode` extension attribute).
 
 Restrictions:
 
@@ -1489,8 +1489,8 @@ Restrictions:
 * the prototype of custom decoding functions must be `sliding`
 * the stream type of custom encoding and decoding functions is `bitstring`
 
-NOTE: Although theoretically variant attributes can be added for custom encoding types, their coding functions would not receive any information about them, so they would essentially be regarded as comments. If custom variant attributes are used, the variant attribute parser’s error level must be lowered to warnings with the compiler option `-E`. +
-The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the `encvalue`, `decvalue`, `decmatch` or `@decoded` is located; if this is the case, then an extra include statement is added in the generated {cpp} code to the header generated for the coder function’s module).
+NOTE: Although theoretically variant attributes can be added for custom encoding types, their coding functions would not receive any information about them, so they would essentially be regarded as comments. If custom variant attributes are used, the variant attribute parser's error level must be lowered to warnings with the compiler option `-E`. +
+The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the `encvalue`, `decvalue`, `decmatch` or `@decoded` is located; if this is the case, then an extra include statement is added in the generated {cpp} code to the header generated for the coder function's module).
 
 Example:
 [source]
@@ -1527,7 +1527,7 @@ This can be achieved the same way as the custom encoder and decoder functions de
 
 This can only be done for ASN.1 types, and has the same restrictions as the custom encoder and decoder functions. There is one extra restriction when using legacy codec handling (see section <<setting-the-default-codec-for-asn-1-types, Setting the default codec for ASN.1 types>>): an ASN.1 type cannot have both a PER encoder/decoder function and an encoder/decoder function of a built-in type set (this is checked separately for encoding and decoding).
 
-NOTE: The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the `encvalue`, `decvalue`, `decmatch` or `@decoded` is located; if this is the case, then an extra include statement is added in the generated {cpp} code to the header generated for the coder function’s module).
+NOTE: The compiler searches all modules when attempting to find the coder functions needed for a type (including those that are not imported to the module where the `encvalue`, `decvalue`, `decmatch` or `@decoded` is located; if this is the case, then an extra include statement is added in the generated {cpp} code to the header generated for the coder function's module).
 
 Example:
 [source]
@@ -1553,7 +1553,7 @@ All information related to implicit message encoding shall be given as `extensio
 
 * Whitespace characters (spaces, tabulators, newlines, etc.) and TTCN–3 comments are allowed anywhere in the attribute text. Attributes containing only comments, whitespace or both are simply ignored +
 Example: +
-`with { extension “/* this is a comment */" }`
+`with { extension "/* this is a comment */" }`
 * When a definition has multiple attributes, the attributes can be given either in one attribute text separated by whitespace or in separate TTCN–3 attributes. +
 Example: +
 `with { extension "address provider" }` means exactly the same as +
@@ -1678,7 +1678,7 @@ inout octetstring;
 type port PT2 message {
 out ControlRequest;
 inout PDUType1, PDUType2;
-} with { extension “user PT1
+} with { extension "user PT1
 
 out(ControlRequest -> ControlRequest: simple;
 PDUType1 -> octetstring: function(enc_PDUType1);
@@ -1797,7 +1797,7 @@ Error encountered during the encoding or decoding process are handled as defined
 
 === Rules Concerning the Encoder
 
-The encoder doesn’t modify the data to be encoded; instead, it substitutes the value of calculated fields (`length`, `pointer`, `tag`, `crosstag` and `presence` fields) with the calculated value in the encoded bitfield if necessary.
+The encoder doesn't modify the data to be encoded; instead, it substitutes the value of calculated fields (`length`, `pointer`, `tag`, `crosstag` and `presence` fields) with the calculated value in the encoded bitfield if necessary.
 
 The value of the `pointer` and `length` fields are calculated during encoding and the resulting value will be used in sending operations. During decoding, the decoder uses the received length and pointer information to determine the length and the place of the fields.
 
@@ -1838,7 +1838,7 @@ with {
 variant "BITORDERINFIELD(lsb)"
 }
 
-const BITn c_bits := ’10010110’B
+const BITn c_bits := '10010110'B
 //Encoding of c_bits gives the following result: 10010110
 
 type bitstring BITnreverse
@@ -1846,7 +1846,7 @@ with {
 variant "BITORDERINFIELD(msb)"
 }
 
-const BITnreverse c_bitsrev := ’10010110’B
+const BITnreverse c_bitsrev := '10010110'B
 //Encoding of c_bitsrev gives the following result: 01101001
 ----
 
@@ -1862,7 +1862,7 @@ Can be used with: stand-alone types or the field of a `record` or `set`.
 
 Description: This attribute specifies the type of encoding of negative integer numbers as follows: +
 `nosign`: negative numbers are not allowed; +
-`2scompl`: 2’s complement encoding; +
+`2scompl`: 2's complement encoding; +
 `signbit`: sign bit and the absolute value is coded. (Only with integer and enumerated types.)
 
 Examples:
@@ -1876,7 +1876,7 @@ variant "FIELDLENGTH(8)"
 }
 
 const INT1 c_i := -1
-//Encoded c_i: 10000001 ’81’O
+//Encoded c_i: 10000001 '81'O
 // sign bitˆ
 //Example number 2): two's complement coding
 type integer INT2 with {variant "COMP(2scompl)";
@@ -1884,7 +1884,7 @@ variant "FIELDLENGTH(8)"
 }
 
 const INT2 c_i2 := -1
-//Encoded c_i2: 11111111 ’FF’O
+//Encoded c_i2: 11111111 'FF'O
 ----
 
 *FIELDLENGTH*
@@ -1972,7 +1972,7 @@ Description: This attribute sets the `FIELDLENGTH`, `BYTEORDER` and `COMP` attri
 
 * `BYTEORDER` is set to `last`.
 * `N bit` sets `COMP` to `signbit`, while `unsigned` `N` `bit` sets `COMP` to `nosign` (its default value).
-* Depending on the encoded value’s type `FIELDLENGTH` is set to: +
+* Depending on the encoded value's type `FIELDLENGTH` is set to: +
 `integer, enumerated, bitstring, boolean:` N; +
 `octetstring, charstring:` N / 8; +
 `hexstring:` N / 4.
@@ -2077,7 +2077,7 @@ with {
 variant "BITORDER(lsb)"
 }
 
-const OCT c_oct := ’123456’O
+const OCT c_oct := '123456'O
 
 //The encoded bitfield: 01010110 00110100 00010010
 // last octet^ ^first octet
@@ -2086,7 +2086,7 @@ const OCT c_oct := ’123456’O
 // 00110100
 // 01010110
 
-//The encoding result in the octetstring ’123456’O
+//The encoding result in the octetstring '123456'O
 
 //Example number 2)
 type octetstring OCTrev
@@ -2094,7 +2094,7 @@ with {
 variant "BITORDER(msb)"
 }
 
-const OCTrev c_octr := ’123456’O
+const OCTrev c_octr := '123456'O
 
 //The encoded bitfield: 01010110 00110100 00010010
 
@@ -2105,7 +2105,7 @@ const OCTrev c_octr := ’123456’O
 // 00101100
 // 01101010
 
-//The encoding results in the octetstring ’482C6A’O
+//The encoding results in the octetstring '482C6A'O
 
 //Example number 3)
 
@@ -2113,7 +2113,7 @@ type bitstring BIT12 with {
 variant "BITORDER(lsb), FIELDLENGTH(12)"
 }
 
-const BIT12 c_bits:=’101101101010’B
+const BIT12 c_bits:='101101101010'B
 //The encoded bitfield: 1011 01101010
 
 // last octet^ ^first octet
@@ -2123,21 +2123,21 @@ The buffer will have the following content:
 // ….1011
 // ^ next field
 
-//The encoding will result in the octetstring ’6A.9’O
+//The encoding will result in the octetstring '6A.9'O
 
 //Example number 4)
 type bitstring BIT12rev with {
 variant "BITORDER(msb), FIELDLENGTH(12)"
 }
 
-const BIT12 c_BIT12rev:=’101101101010’B
+const BIT12 c_BIT12rev:='101101101010'B
 //The encoded bitfield: 1011 01101010
 // last octet^ ^first octet
 //The buffer will have the following content:
 // 01010110
 // ….1101
 // ^ next field
-//The encoding will result in the octetstring ’56.D’O
+//The encoding will result in the octetstring '56.D'O
 ----
 
 *BYTEORDER*
@@ -2168,7 +2168,7 @@ with {
 variant "BYTEORDER(first)"
 }
 
-const OCT c_oct := ’123456’O
+const OCT c_oct := '123456'O
 //The encoded bitfield: 01010110 00110100 00010010
 // last octet^ ^first octet
 
@@ -2177,14 +2177,14 @@ The buffer will have the following content:
 // 00110100
 // 01010110
 
-//The encoding will result in the octetstring ’123456’O
+//The encoding will result in the octetstring '123456'O
 
 //Example number 2)
 type octetstring OCTrev
 with {variant "BYTEORDER(last)"
 }
 
-const OCTrev c_octr := ’123456’O
+const OCTrev c_octr := '123456'O
 //The encoded bitfield: 01010110 00110100 00010010
 // last octet^ ^first octet
 
@@ -2196,13 +2196,13 @@ const OCTrev c_octr := ’123456’O
 
 // 00010010
 
-The encoding will result in the octetstring ’563412’O
+//The encoding will result in the octetstring '563412'O
 //Example number 3)
 type bitstring BIT12 with {
 variant "BYTEORDER(first), FIELDLENGTH(12)"
 }
 
-const BIT12 c_bits:=’100101101010’B
+const BIT12 c_bits:='100101101010'B
 //The encoded bitfield: 1001 01101010
 // last octet^ ^first octet
 The buffer will have the following content:
@@ -2210,20 +2210,20 @@ The buffer will have the following content:
 // ….1001
 // ^ next field
 
-//The encoding will result in the octetstring ’6A.9’O
+//The encoding will result in the octetstring '6A.9'O
 //Example number 4)
 type bitstring BIT12rev with {
 variant "BYTEORDER(last), FIELDLENGTH(12)"
 }
 
-const BIT12rev c_bits:=’100101101010’B
+const BIT12rev c_bits:='100101101010'B
 //The encoded bitfield: 1001 01101010
 // last octet^ ^first octet
 //The buffer will have the following content:
 // 10010110
 // ….1010
 // ^ next field
-//The encoding will result in the octetstring ’96.A’O
+//The encoding will result in the octetstring '96.A'O
 
 ----
 
@@ -2262,11 +2262,11 @@ BIT6 field5
 
 with { variant "FIELDORDER(lsb)" }
 const MyRec_lsb c_pdu := {
-field1:=’1’B // bits of field1: a
-field2:=’00’B // bits of field2: b
-field3:=’111’B // bits of field3: c
-field4:=’0000’B // bits of field4: d
-field5:=’111111’B // bits of field5: e
+field1:='1'B // bits of field1: a
+field2:='00'B // bits of field2: b
+field3:='111'B // bits of field3: c
+field4:='0000'B // bits of field4: d
+field5:='111111'B // bits of field5: e
 }
 
 //Encoding of c_pdu will result in:
@@ -2284,11 +2284,11 @@ BIT6 field5
 
 with { variant "FIELDORDER(msb)" }
 const MyRec_msb c_pdu2 := {
-field1:=’1’B // bits of field1: a
-field2:=’00’B // bits of field2: b
-field3:=’111’B // bits of field3: c
-field4:=’0000’B // bits of field4: d
-field5:=’111111’B // bits of field5: e
+field1:='1'B // bits of field1: a
+field2:='00'B // bits of field2: b
+field3:='111'B // bits of field3: c
+field4:='0000'B // bits of field4: d
+field5:='111111'B // bits of field5: e
 }
 
 //Encoding of c_pdu2 will result in:
@@ -2319,7 +2319,7 @@ Examples:
 type hexstring HEX_high
 with {variant "HEXORDER(high)"}
 
-const HEX_high c_hexs := ’12345’H
+const HEX_high c_hexs := '12345'H
 //The encoded bitfield: 0101 00110100 00010010
 // last octet^ ^first octet
 
@@ -2328,12 +2328,12 @@ const HEX_high c_hexs := ’12345’H
 // 00110100 34
 // ….0101 .5
 // ^ next field
-//The encoding will result in the octetstring ’1234.5’O
+//The encoding will result in the octetstring '1234.5'O
 
 //Example number 2)
 type hexstring HEX_low
 with {variant "HEXORDER(low)"}
-const HEX_low c_hexl := ’12345’H
+const HEX_low c_hexl := '12345'H
 
 //The encoded bitfield: 0101 00110100 00010010
 // last octet^ ^first octet
@@ -2342,19 +2342,19 @@ const HEX_low c_hexl := ’12345’H
 // 01000011 43
 // ….0101 .5 ←not twisted!
 // ^ next field
-//The encoding will result in the octetstring ’2143.5’O
+//The encoding will result in the octetstring '2143.5'O
 
 //Example number 3)
 type octetstring OCT
 with {variant "HEXORDER(high)"}
 
-const OCT c_hocts := ’1234’O
+const OCT c_hocts := '1234'O
 //The encoded bitfield: 00110100 00010010
 // last octet^ ^first octet
 //The buffer will have the following content:
 // 00100001 21
 // 01000011 43
-//The encoding will result in the octetstring ’2143’O
+//The encoding will result in the octetstring '2143'O
 ----
 
 *CSN.1 L/H*
@@ -2412,7 +2412,7 @@ Examples:
 //Example number 1)
 octetstring OCTn
 with {variant "EXTENSION_BIT(reverse)"}
-const OCTn c_octs:=’586211’O
+const OCTn c_octs:='586211'O
 
 //The encoding will have the following result:
 // 11011000
@@ -2420,7 +2420,7 @@ const OCTn c_octs:=’586211’O
 // 00010001
 // ˆ the overwritten EXTENSION_BITs
 
-//The encoding will result in the octetstring ’D8E211’O
+//The encoding will result in the octetstring 'D8E211'O
 //Example number 2)
 
 type record Rec3 {
@@ -2432,10 +2432,10 @@ BIT1 extbit2 optional
 
 with { variant "EXTENSION_BIT(yes)" }
 const Rec3 c_MyRec{
-field1:=’1000001’B,
-extbit1:=’1’B,
-field2:=’1011101’B,
-extbit2:=’0’B
+field1:='1000001'B,
+extbit1:='1'B,
+field2:='1011101'B,
+extbit2:='0'B
 }
 
 //The encoding will have the following result:
@@ -2443,7 +2443,7 @@ extbit2:=’0’B
 // 11011101
 // ˆ the overwritten EXTENSION_BITs
 
-The encoding will result in the octetstring ’41DD’O
+//The encoding will result in the octetstring '41DD'O
 
 //Example number 3)
 type record Rec4{
@@ -2454,8 +2454,8 @@ BIT1 extbit
 type record of Rec4 RecList
 with { variant "EXTENSION_BIT(yes)"}
 const RecList c_recs{
-{ field1:=’10010011011’B, extbit:=’1’B}
-{ field1:=’11010111010’B, extbit:=’0’B}
+{ field1:='10010011011'B, extbit:='1'B}
+{ field1:='11010111010'B, extbit:='0'B}
 }
 
 //The encoding will have the following result:
@@ -2464,7 +2464,7 @@ const RecList c_recs{
 // 11101011
 // ˆ the overwritten EXTENSION_BITs
 
-//The encoding will result in the octetstring ’9BA4EB’O
+//The encoding will result in the octetstring '9BA4EB'O
 ----
 
 *EXTENSION_BIT_GROUP*
@@ -2505,16 +2505,16 @@ variant "EXTENSION_BIT_GROUP(yes,octet4info,extbit4)"
 }
 
 const MyPDU c_pdu:={
-header:=’0F’O,
-octet2info:=’1011011’B,
-extbit1:= ’0’B,
+header:='0F'O,
+octet2info:='1011011'B,
+extbit1:= '0'B,
 octet2ainfo:= omit,
 extbit2:= omit,
-octet3:=’00’O,
-octet4info:=’0110001’B,
-extbit3:=’1’B,
-octet4ainfo:=’0011100’B,
-extbit4:=’0’B,
+octet3:='00'O,
+octet4info:='0110001'B,
+extbit3:='1'B,
+octet4ainfo:='0011100'B,
+extbit4:='0'B,
 }
 
 //The encoding will have the following result:
@@ -2524,7 +2524,7 @@ extbit4:=’0’B,
 // **0**0110001
 // **1**0011100
 // ˆ the overwritten extension bits
-//The encoding will result in the octetstring: ’0FDB00319C’O
+//The encoding will result in the octetstring: '0FDB00319C'O
 ----
 
 ==== Attributes Controlling Padding
@@ -2557,9 +2557,9 @@ variant "ALIGN(left)";
 variant "FIELDLENGTH(10)"
 }
 
-const OCT10 c_oct := ’0102030405’O
-//Encoded value: ’01020304050000000000’O
-//The decoded value: ’01020304050000000000’O
+const OCT10 c_oct := '0102030405'O
+//Encoded value: '01020304050000000000'O
+//The decoded value: '01020304050000000000'O
 //Example number 2)
 type octetstring OCT10length5 length(5)
 with {
@@ -2567,9 +2567,9 @@ variant "ALIGN(left)";
 variant "FIELDLENGTH(10)"
 }
 
-const OCT10length5 c_oct5 := ’0102030405’O
-//Encoded value: ’01020304050000000000’O
-//The decoded value: ’0102030405’O
+const OCT10length5 c_oct5 := '0102030405'O
+//Encoded value: '01020304050000000000'O
+//The decoded value: '0102030405'O
 ----
 
 *PADDING*
@@ -2600,12 +2600,12 @@ Examples:
 //Example number 1)
 type BIT5 Bit5padded with { variant "PADDING(yes)"}
 
-const Bit5padded c_bits:=’10011’B
+const Bit5padded c_bits:='10011'B
 
 //The encoding will have the following result:
 // 00010011
 // ˆ the padding bits
-//The encoding will result in the octetstring ’13’O
+//The encoding will result in the octetstring '13'O
 
 //Example number 2)
 type record Paddedrec{
@@ -2614,8 +2614,8 @@ BIT7 field2
 } with { variant "PADDING(yes)"}
 
 const Paddedrec c_myrec:={
-field1:=’101’B,
-field2:=’0110100’B
+field1:='101'B,
+field2:='0110100'B
 }
 
 //The encoding will have the following result:
@@ -2623,11 +2623,11 @@ field2:=’0110100’B
 // 00000001
 // ˆ the padding bits
 
-//The encoding will result in the octetstring ’A501’O
+//The encoding will result in the octetstring 'A501'O
 
 //Example number 3): padding to 32 bits
 type BIT5 Bit5padded_dw with { variant "PADDING(dword32)"}
-const Bit5padded_dw c_dword:=’10011’B
+const Bit5padded_dw c_dword:='10011'B
 //The encoding will have the following result:
 // 00010011
 // 00000000
@@ -2635,7 +2635,7 @@ const Bit5padded_dw c_dword:=’10011’B
 // 00000000
 // ˆ the padding bits
 
-The encoding will result in the octetstring ’13000000’O
+//The encoding will result in the octetstring '13000000'O
 
 //Example number 4)
 type record Paddedrec_dw{
@@ -2643,8 +2643,8 @@ BIT3 field1,
 BIT7 field2
 } with { variant "PADDING(dword32)"}
 const Paddedrec_dw c_dwords:={
-field1:=’101’B,
-field2:=’0110100’B
+field1:='101'B,
+field2:='0110100'B
 }
 
 //The encoding will have the following result:
@@ -2653,7 +2653,7 @@ field2:=’0110100’B
 // 00000000
 // 00000000
 // ˆ the padding bits
-The encoding will result in the octetstring ’A5010000’O
+//The encoding will result in the octetstring 'A5010000'O
 ----
 
 *PADDING_PATTERN*
@@ -2662,11 +2662,11 @@ Attribute syntax: `PADDING_PATTERN(<parameter>)`
 
 Parameters allowed: bitstring
 
-Default value: `’0’B`
+Default value: `'0'B`
 
 Can be used with: any type with attributes `PADDING` or `PREPADDING`.
 
-Description: This attribute specifies padding pattern used by padding mechanism. The default padding pattern is ’0’B.If the specified padding pattern is shorter than the padding space, then the padding pattern is repeated.
+Description: This attribute specifies the padding pattern used by the padding mechanism. The default padding pattern is '0'B. If the specified padding pattern is shorter than the padding space, the padding pattern is repeated.
 
 Comment: For a particular field or type only one padding pattern can be specified for `PADDING` and `PREPADDING`.
 
@@ -2675,7 +2675,7 @@ Examples:
 ----
 //Example number 1)
 type BIT8 Bit8padded with {
-variant "PREPADDING(yes), PADDING_PATTERN(’1’B)"
+variant "PREPADDING(yes), PADDING_PATTERN('1'B)"
 }
 
 type record PDU {
@@ -2684,19 +2684,19 @@ Bit8padded field2
 } with {variant ""}
 
 const PDU c_myPDU:={
-field1:=’101’B,
-field2:=’10010011’B
+field1:='101'B,
+field2:='10010011'B
 }
 
 //The encoding will have the following result:
 // 11111101
 // 10010011
 //the padding bits are indicated in bold
-//The encoding will result in the octetstring ’FD93’O
+//The encoding will result in the octetstring 'FD93'O
 //Example number 2): padding to 32 bits
 
 type BIT8 Bit8pdd with {
-variant "PREPADDING(dword32), PADDING_PATTERN(’10’B)"
+variant "PREPADDING(dword32), PADDING_PATTERN('10'B)"
 }
 
 type record PDU{
@@ -2704,8 +2704,8 @@ BIT3 field1,
 Bit8pdd field2
 } with {variant ""}
 const PDU c_myPDUplus:={
-field1:=’101’B,
-field2:=’10010011’B
+field1:='101'B,
+field2:='10010011'B
 }
 
 //The encoding will have the following result:
@@ -2716,7 +2716,7 @@ field2:=’10010011’B
 // 10010011
 //The padding bits are indicated in bold
 
-//The encoding will result in the octetstring ’5555555593’O
+//The encoding will result in the octetstring '5555555593'O
 ----
 
 *PADDALL*
@@ -2727,7 +2727,7 @@ Can be used with: `record` or `set`.
 
 Description: If `PADDALL` is specified, the padding parameter specified for a whole `record` or `set` will be valid for every field of the structured type in question.
 
-NOTE: If a different padding parameter is specified for any fields it won’t be overridden by the padding parameter specified for the record.
+NOTE: If a different padding parameter is specified for any fields it won't be overridden by the padding parameter specified for the record.
 
 Examples:
 [source]
@@ -2738,15 +2738,15 @@ BIT3 field1,
 BIT7 field2
 } with { variant "PADDING(yes)"}
 const Paddedrec c_myrec:={
-field1:=’101’B,
-field2:=’0110100’B
+field1:='101'B,
+field2:='0110100'B
 }
 
 //The encoding will have the following result:
 // 10100101
 // 00000001
 // ˆ the padding bits
-//The encoding will result in the octetstring ’A501’O
+//The encoding will result in the octetstring 'A501'O
 
 //Example number 2)
 
@@ -2756,8 +2756,8 @@ BIT7 field2
 } with { variant "PADDING(yes), PADDALL"}
 
 const Padddd c_myrec:={
-field1:=’101’B,
-field2:=’0110100’B
+field1:='101'B,
+field2:='0110100'B
 }
 
 //The encoding will have the following result:
@@ -2765,7 +2765,7 @@ field2:=’0110100’B
 // 00110100
 // ˆ the padding bits
 
-//The encoding will result in the octetstring ’0534’O
+//The encoding will result in the octetstring '0534'O
 
 //Example number 3)
 
@@ -2776,9 +2776,9 @@ BIT7 field3
 } with { variant "PADDING(yes), PADDALL"}
 
 const Padded c_ourrec:={
-field1:=’101’B,
-field2:=’10011’B,
-field3:=’0110100’B
+field1:='101'B,
+field2:='10011'B,
+field3:='0110100'B
 }
 
 //The encoding will have the following result:
@@ -2787,9 +2787,9 @@ field3:=’0110100’B
 // 00110100
 // ˆ the padding bits
 
-//The encoding will result in the octetstring ’051334’O
+//The encoding will result in the octetstring '051334'O
 
-//Example number 4): field1 shouldn’t be padded
+//Example number 4): field1 shouldn't be padded
 
 type record Paddd{
 BIT3 field1,
@@ -2798,16 +2798,16 @@ BIT7 field3
 } with { variant "PADDING(yes), PADDALL";
 variant (field1) "PADDING(no)" }
 const Paddd c_myrec:={
-field1:=’101’B,
-field2:=’10011’B,
-field3:=’0110100’B
+field1:='101'B,
+field2:='10011'B,
+field3:='0110100'B
 }
 
 //The encoding will have the following result:
 // 10011101 < field1 is not padded!!!
 // 00110100
 // ˆ the padding bit
-//The encoding will result in the octetstring ’9D34’O
+//The encoding will result in the octetstring '9D34'O
 ----
 
 *PREPADDING*
@@ -2843,15 +2843,15 @@ BIT3 field1,
 bit8padded field2
 } with {variant ""}
 const PDU c_myPDU:={
-field1:=’101’B,
-field2:=’10010011’B
+field1:='101'B,
+field2:='10010011'B
 }
 
 //The encoding will have the following result:
 // 00000101
 // 10010011
 //The padding bits are indicated in bold
-//The encoding will result in the octetstring ’0593’O
+//The encoding will result in the octetstring '0593'O
 //Example number 2): padding to 32 bits
 
 type BIT8 bit8padded_dw with { variant "PREPADDING(dword32)"}
@@ -2860,8 +2860,8 @@ BIT3 field1,
 bit8padded_dw field2
 } with {variant ""}
 const PDU myPDU:={
-field1:=’101’B,
-field2:=’10010011’B
+field1:='101'B,
+field2:='10010011'B
 }
 
 //The encoding will have the following result:
@@ -2873,7 +2873,7 @@ field2:=’10010011’B
 
 //The padding bits are indicated in bold
 
-//The encoding will result in the octetstring ’0500000093’O
+//The encoding will result in the octetstring '0500000093'O
 ----
 
 ==== Attributes of Length and Pointer Field
@@ -2915,7 +2915,7 @@ octetstring field2
 }
 
 with {
-variant (len) “LENGTHTO(field1);
+variant (len) "LENGTHTO(field1)";
 variant (len) "UNIT(bits)"
 }
 
@@ -2928,7 +2928,7 @@ octetstring field2
 }
 
 with {
-variant (len) “LENGTHTO(len, field1, field2)
+variant (len) "LENGTHTO(len, field1, field2)"
 }
 
 //Example number 3)
@@ -2941,7 +2941,7 @@ octetstring field3
 }
 
 with {
-variant (len) “LENGTHTO(field1, field3)
+variant (len) "LENGTHTO(field1, field3)"
 // field2 is excluded!
 }
 
@@ -2960,15 +2960,15 @@ length_union length_field,
 octetstring data
 } with {
 variant (length_field)
-“CROSSTAG(short_length_field, flag = ’0’B
-long_length_field, flag = ’1’B)“;
+"CROSSTAG(short_length_field, flag = '0'B
+long_length_field, flag = '1'B)";
 variant (length_field) "LENGTHTO(data)"
 }
 
 //Const for short data. Data is shorter than 127 octets:
 
 const Rec4(octetstring oc):={
-flag :=’0’B,
+flag :='0'B,
 length_field:={short_length_field:=0},
 data := oc
 }
@@ -2976,7 +2976,7 @@ data := oc
 //Const for long data. Data is longer than 126 octets:
 
 const Rec4(octetstring oc):={
-flag :=’1’B,
+flag :='1'B,
 length_field:={long_length_field:=0},
 data := oc
 }
@@ -3091,12 +3091,12 @@ const Rec c_rec := {
 ptr1 := <any value>,
 ptr2 := <any value>,
 ptr3 := <any value>,
-field1 := ’010203’O,
-field2 := ’040506’O,
-field3 := ’070809’O
+field1 := '010203'O,
+field2 := '040506'O,
+field3 := '070809'O
 }
 
-//Encoded c_rec: ’030507010203040506070809’O//The value of ptr1: 03
+//Encoded c_rec: '030507010203040506070809'O //The value of ptr1: 03
 //PTROFFSET and UNIT are not set, so the default (0) is being //using.
 //The starting position of ptr1: 0th bit
 //The starting position of field1= 3 * 8 + 0 = 24th bit.
@@ -3210,7 +3210,7 @@ Default value: none
 
 Can be used with: `optional` fields of a `record` or `set`.
 
-Description: Within records some fields may indicate the presence of another optional field. The attribute `PRESENCE` is used to describe these cases. Each optional field can have a `PRESENCE` definition. The syntax of the `PRESENCE` attribute is the following: a `PRESENCE` definition is a presence_indicator expression. `Presence_indicators` are of form `<key> = <constant> or {<key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN>}` where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `’25’O` or `’1001101’B`).
+Description: Within records some fields may indicate the presence of another optional field. The attribute `PRESENCE` is used to describe these cases. Each optional field can have a `PRESENCE` definition. The syntax of the `PRESENCE` attribute is the following: a `PRESENCE` definition is a presence_indicator expression. `Presence_indicators` are of the form `<key> = <constant>` or `{<key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN>}` where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `'25'O` or `'1001101'B`).
 
 NOTE: The PRESENCE attribute can identify the presence of the whole record. In that case the field reference must be omitted.
 
@@ -3223,13 +3223,13 @@ OCT3 field optional
 }
 
 with {
-variant (field) "PRESENCE(presence = ’1’B)"
+variant (field) "PRESENCE(presence = '1'B)"
 }
 
 type record R2{
 OCT1 header,
 OCT1 data
-} with {variant "PRESENCE(header=’11’O)"}
+} with {variant "PRESENCE(header='11'O)"}
 ----
 
 *TAG*
@@ -3242,7 +3242,7 @@ Default value: none
 
 Can be used with: `record` or `set`.
 
-Description: The purpose of the attribute `TAG` is to identify specific values in certain fields of the `set`, `record` elements or `union` choices. When the `TAG` is specified to a `record` or a `set`, the presence of the given element can be identified at decoding. When the `TAG` belongs to a `union`, the union member can be identified at decoding. The attribute is a list of `field_identifications`. Each `field_identification` consists of a record, set or union field name and a `presence_indicator` expression separated by a comma (,). `Presence_indicators` are of form `<key> = <constant>` or `{ <key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN> }` where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (e.g.` 22`, `’25’O` or `’1001101’B`).There is a special presence_indicator: `OTHERWISE`. This indicates the default union member in a union when the TAG belongs to union.
+Description: The purpose of the attribute `TAG` is to identify specific values in certain fields of the `set`, `record` elements or `union` choices. When the `TAG` is specified for a `record` or a `set`, the presence of the given element can be identified at decoding. When the `TAG` belongs to a `union`, the union member can be identified at decoding. The attribute is a list of `field_identifications`. Each `field_identification` consists of a record, set or union field name and a `presence_indicator` expression separated by a comma (,). `Presence_indicators` are of the form `<key> = <constant>` or `{ <key1> = <constant1>, <key2> = <constant2>, … <keyN> = <constantN> }` where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (e.g. `22`, `'25'O` or `'1001101'B`). There is a special presence_indicator: `OTHERWISE`. This indicates the default union member in a union when the `TAG` belongs to a union.
 
 NOTE: `TAG` works on non-optional fields of a record as well. It is recommended to use the attributes `CROSSTAG` or `PRESENCE` leading to more effective decoding.
 
@@ -3263,7 +3263,7 @@ InnerRec field3 optional
 }
 
 with {
-variant “TAG(field1, tag = 1;
+variant "TAG(field1, tag = 1;
 field2, tag = 2;
 field3, tag = 3)"
 }
@@ -3276,7 +3276,7 @@ InnerRec field3
 }
 
 with {
-variant “TAG(field1, tag = 1;
+variant "TAG(field1, tag = 1;
 field2, tag = 2;
 field3, OTHERWISE)"
 }
@@ -3306,7 +3306,7 @@ Default value: none
 
 Can be used with: `union` fields of `records`.
 
-Description: When one field of a `record` specifies the union member of another field of a record, CROSSTAG definition is used. The syntax of the CROSSTAG attribute is the following. Each union field can have a `CROSSTAG` definition. A `CROSSTAG` definition is a list of union `field_identifications`. Each `field_identification` consists of a union field name and a `presence_indicator` expression separated by a comma (,). `Presence_indicators` are of form `<key> = <constant>` or `{<key1> = <constant1>`, `<key2> = <constant2>`, `… <keyN> = <constantN>}` where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `’25’O` or `’1001101’B`).There is a special `presence_indicator`: `OTHERWISE`. This indicates the default union member in union.
+Description: When one field of a `record` specifies the union member of another field of a record, a CROSSTAG definition is used. The syntax of the CROSSTAG attribute is the following. Each union field can have a `CROSSTAG` definition. A `CROSSTAG` definition is a list of union `field_identifications`. Each `field_identification` consists of a union field name and a `presence_indicator` expression separated by a comma (,). `Presence_indicators` are of the form `<key> = <constant>` or `{<key1> = <constant1>`, `<key2> = <constant2>`, `… <keyN> = <constantN>}` where each key is a field(.nestedField) of the `record`, `set` or `union` and each constant is a TTCN–3 constant expression (for example, `22`, `'25'O` or `'1001101'B`). There is a special `presence_indicator`: `OTHERWISE`. This indicates the default union member in a union.
 
 NOTE: The difference between the `TAG` and `CROSSTAG` concept is that in case of `TAG` the field identifier is inside the field to be identified. In case of `CROSSTAG` the field identifier can either precede or succeed the union field it refers to. If the field identifier succeeds the union, they must be in the same record, the union field must be mandatory and all of its embedded types must have the same fixed size.
 
@@ -3326,7 +3326,7 @@ AnyPdu pdu
 }
 
 with {
-variant (pdu) “CROSSTAG( type1, { protocolId = 1,
+variant (pdu) "CROSSTAG( type1, { protocolId = 1,
 protocolId = 11 };
 type2, protocolId = 2;
 type3, protocolId = 3)"
@@ -3355,19 +3355,19 @@ Examples:
 type record R1{
 OCT1 header,
 OCT1 data
-} with {variant "PRESENCE(header=’AA’O)"}
+} with {variant "PRESENCE(header='AA'O)"}
 
 type record of R1 R1list;
 
 type record R2{
 OCT1 header,
 OCT1 data
-} with {variant "PRESENCE(header=’11’O)"}
+} with {variant "PRESENCE(header='11'O)"}
 
 type record R3{
 OCT1 header,
 OCT1 data
-} with {variant "PRESENCE(header=’22’O)"}
+} with {variant "PRESENCE(header='22'O)"}
 
 type set S1 {
 R2 field1,
@@ -3387,19 +3387,19 @@ The decoded value of S1 type:
 
 {
 field1:={
-header:=’11’O,
-data:=’45’O
+header:='11'O,
+data:='45'O
 },
 
 field2:={
-header:=’22’O,
-data:=’67’O
+header:='22'O,
+data:='67'O
 },
 
 field3:={
-{header:=’AA’O,data:=’01’O},
-{header:=’AA’O,data:=’02’O},
-{header:=’AA’O,data:=’03’O}
+{header:='AA'O,data:='01'O},
+{header:='AA'O,data:='02'O},
+{header:='AA'O,data:='03'O}
 }
 }
 
@@ -3451,7 +3451,7 @@ with {
   variant (f2) "FORCEOMIT(opt1)"
 }
 
-// Decoding ‘0102030405’O into a value of type OuterRec1 results in:
+// Decoding '0102030405'O into a value of type OuterRec1 results in:
 // {
 //   f1 := 1,
 //   f2 := { opt1 := omit, opt2 := 2, opt3 := 3, mand := 4 },
@@ -3465,7 +3465,7 @@ with {
   variant (f) "FORCEOMIT(f2.opt2)"
 }
 
-// Decoding ‘01020304’O into a value of type OuterRec2 results in:
+// Decoding '01020304'O into a value of type OuterRec2 results in:
 // {
 //   f := {
 //     f1 := 1,
@@ -3483,7 +3483,7 @@ with {
   variant (f2) "FORCEOMIT(f2.opt2), FORCEOMIT(f2.opt3)"
 }
 
-// Decoding ‘010203040506’O into a value of type OuterRec3 results in:
+// Decoding '010203040506'O into a value of type OuterRec3 results in:
 // {
 //   f1 := {
 //     f1 := 1,
@@ -3516,9 +3516,9 @@ The data starts with a series of ones followed by a zero. This represents the le
 
 Comment: Since the length of the encoding is variable, attribute `FIELDLENGTH` is ignored. Furthermore, `IntX` also sets `BITORDER` and `BITORDERINFIELD` to `msb`, and `BYTEORDER` to first, overwriting any manual settings of these attributes.
 
-Only attribute `COMP` can be used together with `IntX` (if it’s set to `signbit`, then the sign bit will be the first bit after the length).
+Only attribute `COMP` can be used together with `IntX` (if it's set to `signbit`, then the sign bit will be the first bit after the length).
 
-Restrictions: Using `IntX` in a `record` or `set` with `FIELDORDER` set to `lsb` is only supported if the `IntX` field starts at the beginning of a new octet. A compiler error is displayed otherwise. The `IntX` field may start anywhere if the parent `record`/`set’s` `FIELDORDER` is set to `msb`.
+Restrictions: Using `IntX` in a `record` or `set` with `FIELDORDER` set to `lsb` is only supported if the `IntX` field starts at the beginning of a new octet. A compiler error is displayed otherwise. The `IntX` field may start anywhere if the parent `record`/`set's` `FIELDORDER` is set to `msb`.
 
 Examples:
 [source]
@@ -3541,7 +3541,7 @@ type integer IntX_signed with { variant "IntX, COMP(signbit)" }
 // length bits ^^
 // ^ sign bit
 
-// Example 3: Standalone IntX integer type with 2’s complement:
+// Example 3: Standalone IntX integer type with 2's complement:
 type integer IntX_compl with { variant "IntX, COMP(2scompl)" }
 // Encoding integer -2184:
 // 10110111 01111000
@@ -3562,12 +3562,12 @@ variant (ix) "IntX";
 variant (bs) "FIELDLENGTH(8)";
 }
 
-// Encoding record value { i := 716, ix := 716, bs := ‘10101010’B }:
+// Encoding record value { i := 716, ix := 716, bs := '10101010'B }:
 // 00101100 11001000 00101100 11001010 10100000
 // ^^^^^^^^ ^^^^ field `i' (same encoding as `ix', but with no length bits)
 // field `ix' ^^^^ ^^^^^^^^ ^^^^ (the first 2 bits are the length bits)
 // field `bs' ^^^^ ^^^^
-// Note: setting the record’s FIELDORDER to `lsb' in this case is not supported
+// Note: setting the record's FIELDORDER to `lsb' in this case is not supported
 // and would cause the mentioned compiler error.
 ----
 
@@ -3603,11 +3603,11 @@ Field2 field2
 
 with { variant "TOPLEVEL( BITORDER(lsb) )" }
 const WholePDU c_pdu := {
-’12’O,
-’12’O
+'12'O,
+'12'O
 }
 
-//Encoding of c_pdu will result in ’4848’O.
+//Encoding of c_pdu will result in '4848'O.
 ----
 
 [[ttcn-3-types-and-their-attributes]]
@@ -3632,7 +3632,7 @@ Example:
 [source]
 ----
 *//Example number 1): variable length bitstring*
-const bitstring c_mystring:=’1011000101’B
+const bitstring c_mystring:='1011000101'B
 //The resulting bitfield: 1011000101
 //The encoding will have the following result:
 // 11000101
@@ -3640,19 +3640,19 @@ const bitstring c_mystring:=’1011000101’B
 
 *//Example number 2): fixed length bitstring*
 type bitstring BIT7 with { variant "FIELDLENGTH(7)" }
-const BIT7 c_ourstring:=’0101’B
+const BIT7 c_ourstring:='0101'B
 //The resulting bitfield: 0000101
 
 *//Example number 3): left aligned bitstring*
 type bitstring BIT7align with {
 variant "FIELDLENGTH(7), ALIGN(left)" }
-const BIT7align c_yourstring:=’0101’B
+const BIT7align c_yourstring:='0101'B
 //The resulting bitfield: 0101000
 ----
 
 *BOOLEAN*
 
-Coding: The `boolean` value `true` coded as ’1’B,the `boolean` value `false` coded as ’0’B.If `FIELDLENGTH` is specified, the given number of ones (`true`) or zeros (`false`) is coded. If the decoded bitfield is zero the decoded value will be false otherwise true. The default `FIELDLENGTH` for `boolean` type is 1.
+Coding: The `boolean` value `true` is coded as '1'B, the `boolean` value `false` is coded as '0'B. If `FIELDLENGTH` is specified, the given number of ones (`true`) or zeros (`false`) is coded. If the decoded bitfield is zero, the decoded value will be `false`, otherwise `true`. The default `FIELDLENGTH` for the `boolean` type is 1.
 
 Attributes allowed: `FIELDLENGTH (0)`, `N bit` (0).
 
@@ -3793,7 +3793,7 @@ const SingleFloat c_float:=1432432.125
 // 11011011 MMMMMMMM
 // 10000001 MMMMMMMM
 
-//The encoding will result in the octetstring ’49AEDB81’O
+//The encoding will result in the octetstring '49AEDB81'O
 
 *//Example number 2): double precision float*
 type float DoubleFloat
@@ -3823,7 +3823,7 @@ const DoubleFloat c_floatd:=1432432.112232
 
 //The encoding will result in the octetstring
 
-// ’4135DB701CBB3C82’O
+// '4135DB701CBB3C82'O
 ----
 
 *HEXSTRING*
@@ -3843,7 +3843,7 @@ Example:
 [source]
 ----
 *//Example number 1): variable length hexstring*
-const hexstring c_mystring:=’5AF’H
+const hexstring c_mystring:='5AF'H
 
 //The resulting bitfield: 1111 10100101
 //The encoding will have the following result:
@@ -3852,7 +3852,7 @@ const hexstring c_mystring:=’5AF’H
 
 *//Example number 2): fixed length hexstring*
 type hexstring HEX4 with { variant "FIELDLENGTH(4)" }
-const HEX4 c_yourstring:=’5AF’H
+const HEX4 c_yourstring:='5AF'H
 //The resulting bitfield: 00001111 10100101
 //The encoding will have the following result:
 // 10100101 A5
@@ -3861,7 +3861,7 @@ const HEX4 c_yourstring:=’5AF’H
 *//Example number 3): left aligned hexstring*
 type hexstring HEX4align with {
 variant "FIELDLENGTH(4), ALIGN(left)" }
-const HEX4align c_ourstring:=’5AF’H
+const HEX4align c_ourstring:='5AF'H
 
 //The resulting bitfield: 11111010 01010000
 //The encoding will have the following result:
@@ -3897,7 +3897,7 @@ const Int12 c_myint:=1052
 // 00011100
 // ….0100
 
-//The same result represented as octetstring: ’1C.4’O
+//The same result represented as octetstring: '1C.4'O
 
 *//Example number 2)*
 
@@ -3909,7 +3909,7 @@ const Int12sg c_mysignedint:=-1052
 //The encoding will have the following result:
 // 00011100
 // ….1100
-//The same result represented as octetstring: ’1C.C’O
+//The same result represented as octetstring: '1C.C'O
 
 *//Example number 3)*
 
@@ -3920,7 +3920,7 @@ const int12c c_hisint:=-1052
 //The encoding will have the following result:
 // 11100111
 // ….1011
-//The same result represented as octetstring: ’E7.B’O
+//The same result represented as octetstring: 'E7.B'O
 ----
 
 *OCTETSTRING*
@@ -3940,7 +3940,7 @@ Example:
 [source]
 ----
 *//Example number 1): variable length octetstring*
-const octetstring c_mystring:=’25AF’O
+const octetstring c_mystring:='25AF'O
 
 //The resulting bitfield: 10101111 00100101
 //The encoding will have the following result:
@@ -3950,7 +3950,7 @@ const octetstring c_mystring:=’25AF’O
 *//Example number 2): fixed length octetstring*
 
 type octetstring OCT3 with { variant "FIELDLENGTH(3)" }
-const OCT3 c_yourstring:=’25AF’H
+const OCT3 c_yourstring:='25AF'H
 //The resulting bitfield: 00000000 10101111 00100101
 //The encoding will have the following result:
 // 00100101 25
@@ -3960,7 +3960,7 @@ const OCT3 c_yourstring:=’25AF’H
 *//Example number 3): left aligned octetstring*
 type octetstring OCT3align with {
 variant "FIELDLENGTH(3), ALIGN(left)" }
-const OCT3align c_string:=’25AF’H
+const OCT3align c_string:='25AF'H
 
 //The resulting bitfield: 10101111 00100101 00000000
 //The encoding will have the following result:
@@ -4005,16 +4005,16 @@ type union MyUnion{
 Rec field1,
 Rec field2,
 Rec field3
-} with { variant "TAG( field1,{key = ’56’O, key=’7A’}; field2, key = ’FF’; field3,{key = ’A4’O, key = ’99’O})"
+} with { variant "TAG( field1,{key = '56'O, key='7A'O}; field2, key = 'FF'O; field3,{key = 'A4'O, key = '99'O})"
 }
 
 *//Example number 1): successful encoding*
 const MyUnion c_PDU:={
-field1:={ key:=’7A’O, values:=’B2’O}
+field1:={ key:='7A'O, values:='B2'O}
 }
 
 //Chosen field: field1
-//Value of key field: ’7A’O; valid
+//Value of key field: '7A'O; valid
 //No substitution will take place.
 //The encoding will have the following result:
 // 01111010 7A
@@ -4023,12 +4023,12 @@ field1:={ key:=’7A’O, values:=’B2’O}
 *//Example number 2): key field substituted*
 
 const MyUnion c_PDU2:={
-field1:={ key:=’00’O, values:=’B2’O}
+field1:={ key:='00'O, values:='B2'O}
 }
 
 //Chosen field: field1
-//Value of key field: ’00’O not valid
-//The value of key field will be substituted with:’56’O
+//Value of key field: '00'O not valid
+//The value of key field will be substituted with:'56'O
 //The encoding will have the following result:
 // 01010110 56
 // 10110010 B2
@@ -4142,9 +4142,9 @@ type record Subject{
 charstring subject_value
 }
 
-with { variant “BEGIN(’Subject: ’,’
+with { variant "BEGIN('Subject: ','
 (Subject[ ]#(,):[ ]#(,))|"
-“(s[ ]#(,):[ ]#(,))’,
+"(s[ ]#(,):[ ]#(,))',
 case_insensitive)"
 }
 
@@ -4183,11 +4183,11 @@ type record Subject{
 charstring subject_value
 }
 
-with { variant “BEGIN(’Subject: ’,’
+with { variant "BEGIN('Subject: ','
 (Subject[ ]#(,):[ ]#(,))|"
-“(s[ ]#(,):[ ]#(,))’,
-case_insensitive)“;
-variant "END(’’,’([])|([])’)"
+"(s[ ]#(,):[ ]#(,))',
+case_insensitive)";
+variant "END('','([])|([])')"
 }
 
 var Subject v_subj:= "the_subject";
@@ -4223,8 +4223,8 @@ charstring field_2
 }
 
 with {
-variant "BEGIN(’Header: ’)"
-variant "SEPARATOR(’;’)"
+variant "BEGIN('Header: ')"
+variant "SEPARATOR(';')"
 }
 
 var Rec_1 v_rec:={field1:="value_field1",
@@ -4271,16 +4271,16 @@ Usable attributes: `length`, `leading0`
  `attr=value[;attr=value]` +
 Usable attribute: `length`
 |`boolean` |The encoded value of `true` and `false` value: +
-`true:’token’[;false:’token’]` +
-The default encoded value of `true` is ’true’; the default encoded value of `false` is ’false’
+`true:'token'[;false:'token']` +
+The default encoded value of `true` is 'true'; the default encoded value of `false` is 'false'
 |The matching pattern of the values `true` and `false`: +
-`true:{’pattern’[,modifier]}[;false:{’pattern’[,modifier]}]` +
+`true:{'pattern'[,modifier]}[;false:{'pattern'[,modifier]}]` +
 The default decoding method is case sensitive
 |`enumerated` |The encoded value of enumeration: +
-`value:’token’[;value:’token’]` +
+`value:'token'[;value:'token']` +
 The default encoded value of an enumeration is its TTCN-3 identifier.
  |The matching pattern of enumerations: +
-`value:{’pattern’[,modifier]}[;value:{’pattern’[,modifier]}]`
+`value:{'pattern'[,modifier]}[;value:{'pattern'[,modifier]}]`
 The default decoding method is case sensitive.
 |`set of`, `record of` |Not applicable |The format of the decoding rule: +
 `attr=value[;attr=value]` +
@@ -4308,53 +4308,53 @@ variant "TEXT_CODING(length=5;leading0=true)"
 }
 
 var My_int v_a:=4;
-// The encoded value: ’00004’
+// The encoded value: '00004'
 *//Example number 2): integer without leading zero*
 type integer My_int2 with {
 variant "TEXT_CODING(length=5)"
 }
 
 var My_int2 v_aa:=4;
-// The encoded value: ’ 4’
+// The encoded value: ' 4'
 *//Example number 3): character string*
 type charstring My_char with {
 variant "TEXT_CODING(length=5)"
 }
 
-var My_char v_aaa:=’str’;
-// The encoded value: ’ str’
+var My_char v_aaa:='str';
+// The encoded value: ' str'
 *//Example number 4): centered character string*
 
 type charstring My_char2 with {
 variant "TEXT_CODING(length=5;just=center)"
 }
 
-var My_char2 v_aaaa:=’str’;
-// The encoded value: ’ str ’
+var My_char2 v_aaaa:='str';
+// The encoded value: ' str '
 *//Example number 5): character string converted to upper case*
 type charstring My_char3 with {
 variant "TEXT_CODING(length=5;convert=upper_case)"
 }
 
-var my_char3 v_b:=’str’;
+var My_char3 v_b:='str';
 
-// The encoded value: ’ STR’
+// The encoded value: ' STR'
 *//Example number 6): case converted character string*
 
 type charstring My_char4 with {
 variant "TEXT_CODING(convert=upper_case,convert=lower_case)"
 }
 
-var My_char4 v_bb:=’str’;
-// The encoded value: ’STR’
-// The decoded value: ’str’
+var My_char4 v_bb:='str';
+// The encoded value: 'STR'
+// The decoded value: 'str'
 *//Example number 7): boolean*
 type boolean My_bool with {
-variant "TEXT_CODING(true:’good’;false:’bad’)"
+variant "TEXT_CODING(true:'good';false:'bad')"
 }
 
 var My_bool v_bbb:=false;
-// The encoded value: ’bad’
+// The encoded value: 'bad'
 ----
 
 [[bnf-of-the-attributes]]
@@ -4365,7 +4365,7 @@ COMMA = ","
 
 SEMI = ";"
 
-token = any valid character literal, "’" must be escaped
+token = any valid character literal, "'" must be escaped
 
 pattern = valid TTCN-3 character pattern; references are not supported
 
@@ -4385,9 +4385,9 @@ separator-attr = "SEPARATOR(" encode-token [ COMMA [ match-expr ] [COMMA modifie
 
 coding-attr = "TEXT_CODING(" [ [encoding-rules] [COMMA [decoding-rules] [ COMMA match-expr [COMMA modifier] ] ] ] ")"
 
-encode-token = "’" token "’"
+encode-token = "'" token "'"
 
-match-expr = "’" pattern "’"
+match-expr = "'" pattern "'"
 
 modifier = "case_sensitive" / "case_insensitive"
 
@@ -4776,7 +4776,7 @@ The "name as "" (i.e. freetext is empty) form designates that the TTCN-3 field
 
 The "name as capitalized" and "name as uncapitalized" forms identify that only the first character of the related TTCN3 type or field name shall be changed to lower case or upper case respectively.
 
-The "name as lowercased“ and "name as uppercased" forms identify that each character of the related TTCN3 type or field name shall be changed to lower case or upper case respectively.
+The "name as lowercased" and "name as uppercased" forms identify that each character of the related TTCN3 type or field name shall be changed to lower case or upper case respectively.
 
 The "name all as capitalized", "name all as uncapitalized", "name as lowercased" and "name as uppercased" forms has effect on all direct fields of the TTCN-3 definition to which the encoding instruction is applied (e.g. in case of a structured type definition to the names of its fields in a non-recursive way but not to the name of the definition itself and not to the name of fields embedded to other fields).
 
@@ -5166,7 +5166,7 @@ Attribute syntax: useType
 
 Applicable to (TTCN-3) unions
 
-Description The encoding instruction designates that the encoder shall not use the start-tag and the end-tag around the encoding of the selected alternative (field of the TTCN-3 union type), a type identification attribute (`xsi:type`, where `xsi` is the prefix of the control namespace) will be used to identify the selected alternative. This attribute may be omitted in the case of the first alternative. The decoder shall place the received XML value into the corresponding alternative of the TTCN-3 `union` type, based on the received value and the type identification attribute. The first alternative will be selected if this attribute is not present. The encoder will never insert the type identification attribute for the first alternative. Any attributes the selected alternative might have will be inserted to the union’s XML tag instead (after the type identification attribute, if it exists).
+Description The encoding instruction designates that the encoder shall not use the start-tag and the end-tag around the encoding of the selected alternative (field of the TTCN-3 union type); instead, a type identification attribute (`xsi:type`, where `xsi` is the prefix of the control namespace) will be used to identify the selected alternative. This attribute may be omitted in the case of the first alternative. The decoder shall place the received XML value into the corresponding alternative of the TTCN-3 `union` type, based on the received value and the type identification attribute. The first alternative will be selected if this attribute is not present. The encoder will never insert the type identification attribute for the first alternative. Any attributes the selected alternative might have will be inserted into the union's XML tag instead (after the type identification attribute, if it exists).
 
 The `useType` or `useUnion` coding instructions cannot be applied to anytype.
 
@@ -5212,7 +5212,7 @@ size := 9
 }
 /*
 
-<Product xmlns:xsi=’http://www.w3.org/2001/XMLSchema-instance’ xsi:type=’shoes’ available=’false’>
+<Product xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:type='shoes' available='false'>
 <color>red</color>
 <size>9</size>
 </Product>
@@ -5228,7 +5228,7 @@ size := 9
 
 /*
 
-<Product xmlns:xsi=’http://www.w3.org/2001/XMLSchema-instance’>
+<Product xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'>
 <color>red</color>
 <make>ABC Company</make>
 <size>9</size>
@@ -5359,9 +5359,9 @@ The rules also apply to the following ASN.1 types (if imported to a TTCN-3 modul
 * VideotexString
 * VisibleString
 
-JSON encoding and decoding is allowed for types with the attribute "encode` "`JSON`"'. The basic types specified in the list above support JSON encoding and decoding by default.
+JSON encoding and decoding is allowed for types with the attribute `encode "JSON"`. The basic types specified in the list above support JSON encoding and decoding by default.
 
-The attribute "encode` "`JSON`"' can also be set globally (at module level), allowing JSON coding for all types defined in that module.
+The attribute `encode "JSON"` can also be set globally (at module level), allowing JSON coding for all types defined in that module.
 
 Types imported from ASN.1 modules (from the list above) automatically have JSON coding allowed and cannot have JSON variant attributes.
 
@@ -5408,17 +5408,22 @@ Unions, anytypes and ASN.1 open types are encoded as JSON objects. The object wi
 
 The following sections describe the TTCN-3 attributes that influence JSON coding (only affects TTCN-3 types, ASN.1 types cannot have attributes that influence JSON coding).
 
-All JSON attributes begin with the word `JSON` followed by a colon (`JSON:<attribute>`). Any number of white spaces (spaces and tabs only) can be added between each word or identifier in the attribute syntax, but at least one is necessary if the syntax does not specify a separator (a comma or a colon). The attribute can also start and end with white spaces.
+All JSON attributes begin with the word `JSON` followed by a colon (`JSON:<attribute>`).
+Any number of white spaces (spaces and tabs only) can be added between each word or identifier in the attribute syntax,
+but at least one is necessary if the syntax does not specify a separator (a comma or a colon). The attribute can also start and end with white spaces.
 
 Alternatively, the syntaxes defined in <<13-references.adoc#_25, [25]>> can also be used for the supported attributes (without the need for the `JSON:` prefix).
 
 Example:
 [source]
 ----
-variant(field1) “JSON:omit as null”;			// ok
-variant(field2) “ JSON : omit as null ”;			// ok (extra spaces)
-variant(field3) “JSON	:	omit	as	null”;	// ok (with tabs)
-variant(field4) “JSON:omitasnull”;			// not ok
+variant(field1) "JSON:omit as null";			// ok
+
+variant(field2) " JSON : omit as null ";			// ok (extra spaces)
+
+variant(field3) "JSON	:	omit	as	null";	// ok (with tabs)
+
+variant(field4) "JSON:omitasnull";			// not ok
 ----
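The whitespace rule above can be checked mechanically. The sketch below is a hypothetical checker (not TITAN's actual attribute parser) for the `omit as null` form: spaces and tabs may surround `JSON`, the colon and the keywords, but the three keywords must be separated by at least one space or tab.

```python
import re

# Hypothetical checker for the "JSON:omit as null" attribute form:
# optional spaces/tabs around "JSON" and the colon, but mandatory
# whitespace between the keywords "omit", "as" and "null".
PATTERN = re.compile(r"^[ \t]*JSON[ \t]*:[ \t]*omit[ \t]+as[ \t]+null[ \t]*$")

def is_valid_omit_as_null(attr: str) -> bool:
    return PATTERN.match(attr) is not None

print(is_valid_omit_as_null("JSON:omit as null"))        # True
print(is_valid_omit_as_null(" JSON : omit as null "))    # True (extra spaces)
print(is_valid_omit_as_null("JSON\t:\tomit\tas\tnull"))  # True (with tabs)
print(is_valid_omit_as_null("JSON:omitasnull"))          # False (no separators)
```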
 
 *Omit as null*
@@ -5437,13 +5442,13 @@ type record PhoneNumber {
   integer networkPrefix,
   integer localNumber
 } with {
-  variant(countryPrefix) “JSON:omit as null”
+  variant(countryPrefix) "JSON:omit as null"
 }
 var PhoneNumber pn := { omit, 20, 1234567 }
 // JSON code with the attribute:
-// {“countryPrefix”:null,”networkPrefix”:20, “localNumber”:1234567}
+// {"countryPrefix":null,"networkPrefix":20, "localNumber":1234567}
 // JSON code without the attribute:
-// {”networkPrefix”:20, “localNumber”:1234567}
+// {"networkPrefix":20, "localNumber":1234567}
 ----
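The difference the attribute makes can be reproduced with plain `json` calls; this is only an illustration of the JSON text shown above, not TITAN's encoder:

```python
import json

# Value from the example above: countryPrefix is omitted in TTCN-3.
value = {"countryPrefix": None, "networkPrefix": 20, "localNumber": 1234567}

# With "omit as null": the omitted field is encoded as null.
with_attr = json.dumps(value, separators=(",", ":"))

# Without the attribute: the omitted field is skipped entirely.
without_attr = json.dumps(
    {k: v for k, v in value.items() if v is not None},
    separators=(",", ":"),
)

print(with_attr)     # {"countryPrefix":null,"networkPrefix":20,"localNumber":1234567}
print(without_attr)  # {"networkPrefix":20,"localNumber":1234567}
```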
 
 *Name as …*
@@ -5452,7 +5457,7 @@ Attribute syntax: name as <alias>
 
 Applicable to (TTCN-3): Fields of records, sets and unions
 
-Description: Gives the specified field a different name in the JSON code. The encoder will use this alias instead of the field’s name in TTCN-3, and the decoder will look for this alias when decoding this field. The syntax of the alias is the same as the syntax of an identifier in TITAN (regex: [A-Za-z][A-Za-z0-9_]*).
+Description: Gives the specified field a different name in the JSON code. The encoder will use this alias instead of the field's name in TTCN-3, and the decoder will look for this alias when decoding this field. The syntax of the alias is the same as the syntax of an identifier in TITAN (regex: [A-Za-z][A-Za-z0-9_]*).
 
 Example:
 [source]
@@ -5467,9 +5472,9 @@ type union PersionID {
   variant(name) "JSON:name as Name";
 }
 type record of PersionID PersionIDs;
-var persionIDs pids := { { numericID := 189249214 }, { email := “jdoe@mail.com” }, { name := “John Doe” } };
+var PersionIDs pids := { { numericID := 189249214 }, { email := "jdoe@mail.com" }, { name := "John Doe" } };
 // JSON code:
-// [{“ID”:189249214},{“Email”:“jdoe@mail.com”},{“Name”:“John Doe”}]
+// [{"ID":189249214},{"Email":"jdoe@mail.com"},{"Name":"John Doe"}]
 
 ----
 
@@ -5483,7 +5488,7 @@ Description: The union, record, set or anytype will be encoded as a JSON value i
 
 This attribute can also be applied to fields of records, sets or unions, or to the element types of records of, sets of or arrays, if they meet the mentioned restrictions. In this case these fields or elements are encoded as JSON values when they are encoded as part of their larger structure (but the types of these fields or elements might be encoded as JSON objects when encoded alone, or as parts of other structures).
 
-NOTE: Pay close attention to the order of the fields when using this attribute on unions and the anytype. It’s a good idea to declare more restrictive fields before less restrictive ones (e.g.: hexstring is more restrictive than universal charstring, because hexstring can only decode hex digits, whereas universal charstring can decode any character; see also examples below).
+NOTE: Pay close attention to the order of the fields when using this attribute on unions and the anytype. It's a good idea to declare more restrictive fields before less restrictive ones (e.g.: hexstring is more restrictive than universal charstring, because hexstring can only decode hex digits, whereas universal charstring can decode any character; see also examples below).
 
 Examples:
 [source]
@@ -5495,7 +5500,7 @@ type union U1 { // good order of fields
   octetstring os,
   charstring cs
 } with {
-  variant “JSON : as value”
+  variant "JSON : as value"
 }
 
 type union U2 { // bad order of fields
@@ -5504,28 +5509,28 @@ type union U2 { // bad order of fields
   charstring cs,
   octetstring os
 } with {
-  variant “JSON : as value”
+  variant "JSON : as value"
 }
 
 type record of U1 RoU1;
 type record of U2 RoU2;
 
-var RoU1 v_rou1 := { { i := 10 }, { f := 6.4 }, { os := ‘1ED5’O }, { cs := “hello” } };
-var RoU2 v_rou2 := { { i := 10 }, { f := 6.4 }, { os := ‘1ED5’O }, { cs := “hello” } };
+var RoU1 v_rou1 := { { i := 10 }, { f := 6.4 }, { os := '1ED5'O }, { cs := "hello" } };
+var RoU2 v_rou2 := { { i := 10 }, { f := 6.4 }, { os := '1ED5'O }, { cs := "hello" } };
 
 // Both v_rou1 and v_rou2 will be encoded into:
-// [10,6.4,“1ED5”,“hello”]
+// [10,6.4,"1ED5","hello"]
 // This JSON document will be decoded into v_rou1, when decoding as type RoU1,
 // however it will not be decoded into v_rou2, when decoding as RoU2; instead
 // the float field will decode both numbers and the charstring field will
-// decode both strings: { { f := 10.0 }, { f := 6.4 }, { cs := “1ED5” },
-// { cs := “hello” } };
+// decode both strings: { { f := 10.0 }, { f := 6.4 }, { cs := "1ED5" },
+// { cs := "hello" } };
 
 // Example 2: record with one field
 type record R {
   integer field
 }
 with {
-  variant “JSON: as value”
+  variant "JSON: as value"
 }
 type record of R RoR;
 const RoR c_recs := { { field := 3 }, { field := 6 } };
@@ -5538,15 +5543,15 @@ type record AnyRec {
   anytype val
 }
 with {
-  variant (val) “JSON: as value”;
-  variant (val) “JSON: name as value”;
+  variant (val) "JSON: as value";
+  variant (val) "JSON: name as value";
 }
-const AnyRec c_val := { val := { charstring := “abc” } };
-// is encoded into: {“value”:“abc”}
+const AnyRec c_val := { val := { charstring := "abc" } };
+// is encoded into: {"value":"abc"}
 ...
 } // end of module
 with {
-  extension “anytype integer, charstring”
+  extension "anytype integer, charstring"
 }
 ----
 
@@ -5556,7 +5561,7 @@ Attribute syntax: default(<value>)
 
 Applicable to (TTCN-3): Fields of records and sets
 
-Description: The decoder will set the given value to the field if it does not appear in the JSON document. The <value> can contain the JSON encoding of a value of the field’s type (only basic types are allowed). String types don’t need the starting and ending quotes. All JSON escaped characters can be used, plus the escape sequence ")" will add a ")" (right round bracket) character.
+Description: The decoder will set the given value to the field if it does not appear in the JSON document. The <value> can contain the JSON encoding of a value of the field's type (only basic types are allowed). String types don't need the starting and ending quotes. All JSON escaped characters can be used, plus the escape sequence ")" will add a ")" (right round bracket) character.
 
 The only allowed structured value is the empty structure value `{}`, which can be set for `record of` and `set of` types, as well as empty `record` and `set` types.
 
@@ -5571,15 +5576,15 @@ type record Product {
   octetstring id optional,
   charstring from
 } with {
-  variant(id) “JSON : default (FFFF)”
-  variant(from) “JSON:default(Hungary)”
+  variant(id) "JSON : default (FFFF)"
+  variant(from) "JSON:default(Hungary)"
 }
 
-// { “name” : “Shoe”, “price” : 29.50 } will be decoded into:
-// { name := “Shoe”, price := 29.5, id := ‘FFFF’O, from := “Hungary” }
+// { "name" : "Shoe", "price" : 29.50 } will be decoded into:
+// { name := "Shoe", price := 29.5, id := 'FFFF'O, from := "Hungary" }
 
-// { “name” : “Shirt”, “price” : 12.99, “id” : null } will be decoded into:
-// { name := “Shirt”, price := 12.99, id := omit, from := “Hungary” }
+// { "name" : "Shirt", "price" : 12.99, "id" : null } will be decoded into:
+// { name := "Shirt", price := 12.99, id := omit, from := "Hungary" }
 ----
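The decoder's fallback behaviour can be mimicked with a dictionary lookup. This is a hypothetical sketch of the rule, not TITAN's decoder; `"FFFF"` stands in for the octetstring default `'FFFF'O` from the example:

```python
import json

# Defaults from the example above (id's 'FFFF'O shown here as a hex string).
DEFAULTS = {"id": "FFFF", "from": "Hungary"}

def decode_product(doc: str) -> dict:
    decoded = json.loads(doc)
    for field, default in DEFAULTS.items():
        # A field absent from the JSON document gets its default value;
        # an explicit null means the optional field is omitted instead.
        if field not in decoded:
            decoded[field] = default
    return decoded

print(decode_product('{ "name" : "Shoe", "price" : 29.50 }'))
print(decode_product('{ "name" : "Shirt", "price" : 12.99, "id" : null }'))
```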
 
 *Extend*
@@ -5592,7 +5597,7 @@ Description: Extends the JSON schema segment generated for this type with the sp
 
 Both <key> and <value> are strings that can contain any character of a JSON string, plus the escape sequence ")" can be used to add a ")" (right round bracket) character.
 
-This attribute can be set multiple times for a type, each key-value pair is inserted as a field to the end of the type’s schema. Extending a schema with multiple fields with the same key produces a warning. Using one of the keywords used in the generated schema also produces a warning.
+This attribute can be set multiple times for a type, each key-value pair is inserted as a field to the end of the type's schema. Extending a schema with multiple fields with the same key produces a warning. Using one of the keywords used in the generated schema also produces a warning.
 
 This attribute only influences schema generation. It has no effect on encoding or decoding values.
 
@@ -5604,9 +5609,9 @@ Applicable to (TTCN-3) Records, sets and fields of records and sets
 
 Description Allows the encoding and decoding of unbound fields with the help of a meta info field. The attribute can be set to fields individually, or to the whole `record/set` (which is equal to setting the attribute for each of its fields).
 
-The encoder sets the field’s value in JSON to `null` and inserts an extra (meta info) field into the JSON object. The meta info field’s name is `metainfo <fieldname>`, where <fieldname> is the name of the unbound field (or its alias, if the `name as …` attribute is set). Its value is `unbound` (as a JSON string).
+The encoder sets the field's value in JSON to `null` and inserts an extra (meta info) field into the JSON object. The meta info field's name is `metainfo <fieldname>`, where <fieldname> is the name of the unbound field (or its alias, if the `name as …` attribute is set). Its value is `unbound` (as a JSON string).
 
-The decoder accepts the meta info field regardless of its position in the JSON object (it can even appear before the field it refers to). If the meta info field’s value is not `unbound`, or it refers to a field that does not exist or does not have this attribute set, then an encoding error is displayed. The referenced field must either be `null` or a valid JSON value decodable by the field.
+The decoder accepts the meta info field regardless of its position in the JSON object (it can even appear before the field it refers to). If the meta info field's value is not `unbound`, or it refers to a field that does not exist or does not have this attribute set, then an encoding error is displayed. The referenced field must either be `null` or a valid JSON value decodable by the field.
 
 Example:
 [source]
@@ -5621,9 +5626,9 @@ with {
 }
 
 // { num := 6, str := <unbound> } is encoded into:
-// {“num”:6,”str”:null,”metainfo str”:”unbound”}
+// {"num":6,"str":null,"metainfo str":"unbound"}
 
-// Example 2: meta info for the whole set (with “name as” and optional field)
+// Example 2: meta info for the whole set (with "name as" and optional field)
 type set Set {
   integer num,
   charstring str,
@@ -5635,17 +5640,17 @@ with {
 }
 
 // { num := <unbound>, str := "abc", octets := <unbound> } is encoded into:
-// {“int”:null,”metainfo int”:”unbound”,”str”:”abc”,”octets”:null,
-// ”metainfo octets”:”unbound”}
+// {"int":null,"metainfo int":"unbound","str":"abc","octets":null,
+// "metainfo octets":"unbound"}
 
 // Example 3: other values accepted by the decoder
 // (these cannot be produced by the encoder)
 
 // { "int" : 3, "str" : "abc", "octets" : "1234", "metainfo int" : "unbound" }
-// is decoded into: { num := <unbound>, str := “abc”, octets := ‘1234’O }
+// is decoded into: { num := <unbound>, str := "abc", octets := '1234'O }
 
 // {"metainfo int" : "unbound", "int" : null, "str" : "abc", "octets" : "1234"}
-// is decoded into: { num := <unbound>, str := “abc”, octets := ‘1234’O }
+// is decoded into: { num := <unbound>, str := "abc", octets := '1234'O }
 ----
 
 *As number*
@@ -5654,7 +5659,7 @@ Attribute syntax: as number
 
 Applicable to (TTCN-3): Enumerated types
 
-Description: If set, the enumerated value’s numeric form will be encoded as a JSON number, instead of its name form as a JSON string.
+Description: If set, the enumerated value's numeric form will be encoded as a JSON number, instead of its name form as a JSON string.
 
 Similarly, the decoder will only accept JSON numbers equal to an enumerated value, if this attribute is set.
 
@@ -5663,7 +5668,7 @@ Example:
 ----
 type enumerated Length { Short (0), Medium, Long(10) }
 with {
-  variant “JSON: as number”
+  variant "JSON: as number"
 }
 type record of Length Lengths;
 const Lengths c_len := { Short, Medium, Long };
@@ -5676,15 +5681,15 @@ Attribute syntax: chosen (<parameters>)
 
 Applicable to (TTCN-3): Union fields of records and sets
 
-Description: This attribute indicates that the fields of the target `union` will be encoded without field names (as if the `union` had the attribute as `value`), and that the selected field in the `union` will be determined by the values of other fields in the parent `record`/`set`, as described by the rules in the attribute’s parameters.
+Description: This attribute indicates that the fields of the target `union` will be encoded without field names (as if the `union` had the attribute as `value`), and that the selected field in the `union` will be determined by the values of other fields in the parent `record`/`set`, as described by the rules in the attribute's parameters.
 
-The attribute’s parameters are a list of rules, separated by semicolons (;). Each rule consists of a field name from the `union` (or `omit`, if the `union` is an optional field in the parent `record`/`set`), and a condition (or list of conditions). If the condition is true, then the specified field will be selected (or the field will be omitted). If there are multiple conditions, then only one of them needs to be true for the specified field to be selected.
+The attribute's parameters are a list of rules, separated by semicolons (;). Each rule consists of a field name from the `union` (or `omit`, if the `union` is an optional field in the parent `record`/`set`), and a condition (or list of conditions). If the condition is true, then the specified field will be selected (or the field will be omitted). If there are multiple conditions, then only one of them needs to be true for the specified field to be selected.
 
 The rules have the following syntax:
 
 _<field or omit>, <condition>;_
 
-if there’s only one condition, *or*
+if there's only one condition, *or*
 
 _<field or omit>, { <condition1>, <condition2>, … };_
 
@@ -5710,11 +5715,11 @@ type record PduWithId {
   Choices field optional
 }
 with {
-  variant (field) “chosen ( type1, { protocolId = 1, protocolId = 11 };
+  variant (field) "chosen ( type1, { protocolId = 1, protocolId = 11 };
                             type2, protocolId = 2;
                             type3, protocolId = 3;
-                            omit, otherwise)”;
-  // variant (protocolId) “default (2)”;
+                            omit, otherwise)";
+  // variant (protocolId) "default (2)";
 }
 type union Choices {
   StructType1 type1,
@@ -5724,14 +5729,14 @@ type union Choices {
 // When decoding a value of type PduWithId, type1 will be selected if
 // protocolId is 1 or 11, type2 if protocolId is 2, type3 if protocolId is 3,
 // and the field will be omitted in all other cases.
-// For example { “protocolId” : 2, “field” : { ... } } is decoded into:
+// For example { "protocolId" : 2, "field" : { ... } } is decoded into:
 // { protocolId := 2, field := { type2 := { ... } } }
 // Note: the conditions in the attribute are evaluated when the decoder reaches
 // the union field, so the protocolId field must precede the union field in the
 // JSON document. Otherwise the decoder will use whatever value the protocolId
 // field had before decoding began (likely <unbound>, which will cause a DTE).
 
-// Note: If the protocolId field had the attribute ‘default’ (see commented
+// Note: If the protocolId field had the attribute 'default' (see commented
 // line in the example), then the default value would be used to determine the
 // selected field in the union, if the protocolId field is not decoded before
 // the union field.
@@ -5868,7 +5873,7 @@ type record PhoneNumber {
   integer networkPrefix,
   integer localNumber
 } with {
-  variant(countryPrefix) “JSON:omit as null”
+  variant(countryPrefix) "JSON:omit as null"
 }
 type record Profile {
   charstring name,
@@ -5880,7 +5885,7 @@ type record Profile {
   variant(emailAddr) "JSON: name as email";
 }
 external function f_enc_profile(in Profile par) return octetstring
-  with { extension “prototype(convert) encode(JSON) printing(pretty)” }
+  with { extension "prototype(convert) encode(JSON) printing(pretty)" }
 …
 var Profile prof := { "John Doe", { omit, 20, 1234567 }, "jdoe@mail.com", { { "December", 31, Saturday }, { "February", 7, Friday } } };
 log(f_enc_profile(prof));
@@ -5926,7 +5931,7 @@ If option `–f` is set, then the schema will only validate types that have JSON
 
 The options `-A` and `-T` can be used before each input file to specify its type (`-A` for ASN.1 files and `-T` for TTCN-3 files). If a file is not preceded by either of these options, then the compiler will attempt to determine its type based on its contents.
 
-The last parameter specifies the name of the JSON schema file if it is preceded by a dash (-). Otherwise the name of the schema will be created using the first input file name (its `.asn` or `.ttcn` extension will be replaced by `.json`, or, if it doesn’t have either of these extension, then `.json` will simply be appended to its end).
+The last parameter specifies the name of the JSON schema file if it is preceded by a dash (-). Otherwise the name of the schema will be created using the first input file name (its `.asn` or `.ttcn` extension will be replaced by `.json`, or, if it doesn't have either of these extensions, then `.json` will simply be appended to its end).
 
 Usage examples: `compiler -ttcn2json -T module1.ttcn -A module2.asn - schema.json` and `compiler --ttcn2json -j module1.ttcn module2.asn`
 
@@ -5937,9 +5942,9 @@ The first example will generate the `schema.json` JSON document containing the s
 
 At the top level, the schema contains a JSON object with two properties.
 
-The first property, "definitions", has the schema segments of the type definitions in the TTCN-3 and ASN.1 modules as its value. This value is a JSON object with one property (key-value pair) for each module. Each property has the module name as its key and an object containing the schema segments for the types definied in that module as its key. Similarly, each type definition’s key is the type name and its value is the type’s schema segment (these will be described in the next sections).
+The first property, "definitions", has the schema segments of the type definitions in the TTCN-3 and ASN.1 modules as its value. This value is a JSON object with one property (key-value pair) for each module. Each property has the module name as its key and an object containing the schema segments for the types defined in that module as its value. Similarly, each type definition's key is the type name and its value is the type's schema segment (these will be described in the next sections).
 
-The second top level property is an "anyOf" structure, which contains references to the TTCN-3 and ASN.1 types’ schema segments under "definitions". The types listed here are the ones validated by the schema. If the compiler option `–f` is set, then only the schema segments of types that have either a JSON encoding or decoding function (or both) will be referenced (ASN.1 types can have JSON encoding/decoding functions declared in TTCN-3 modules that import them). Extra information related to the encoding/decoding function(s) is stored after each reference.
+The second top level property is an "anyOf" structure, which contains references to the TTCN-3 and ASN.1 types' schema segments under "definitions". The types listed here are the ones validated by the schema. If the compiler option `-f` is set, then only the schema segments of types that have either a JSON encoding or decoding function (or both) will be referenced (ASN.1 types can have JSON encoding/decoding functions declared in TTCN-3 modules that import them). Extra information related to the encoding/decoding function(s) is stored after each reference.
 
 Example:
 [source]
@@ -5950,61 +5955,61 @@ module MyModule {
     integer num
   }
   external function f_enc_h(in Height h) return octetstring
-    with { extension “prototype(convert) encode(JSON)” }
+    with { extension "prototype(convert) encode(JSON)" }
   external function f_dec_n(in octetstring o) return Num
-    with { extension “prototype(convert) decode(JSON)” }
+    with { extension "prototype(convert) decode(JSON)" }
 } with {
-  encode ”JSON”
+  encode "JSON"
 }
 // Generated JSON schema:
 // {
-//     “definitions” : {
-//         “MyModule” : {
-//             “Height” : {
-//                 “enum” : [
-//                     “Short”,
-//                     “Medium”,
-//                     “Tall”
+//     "definitions" : {
+//         "MyModule" : {
+//             "Height" : {
+//                 "enum" : [
+//                     "Short",
+//                     "Medium",
+//                     "Tall"
 //                 ],
-//                 “numericValues” : [
+//                 "numericValues" : [
 //                     0,
 //                     1,
 //                     2
 //                 ]
 //             },
-//             “Num” : {
-//                 “type” : “object”,
-//                 “subType” : “set”,
-//                 “properties” : {
-//                     “num” : {
-//                         “type” : “integer”
+//             "Num" : {
+//                 "type" : "object",
+//                 "subType" : "set",
+//                 "properties" : {
+//                     "num" : {
+//                         "type" : "integer"
 //                     }
 //                 },
-//                 “additionalProperties” : false,
-//                 “required” : [
-//                     “num”
+//                 "additionalProperties" : false,
+//                 "required" : [
+//                     "num"
 //                 ]
 //             }
 //         }
 //     },
-//     “anyOf” : [
+//     "anyOf" : [
 //         {
-//             “$ref” : “#/definitions/MyModule/Height”,
-//             ”encoding” : {
-//                 ”prototype” : [
-//                     ”convert”,
-//                     ”f_enc_h”,
-//                     ”h”
+//             "$ref" : "#/definitions/MyModule/Height",
+//             "encoding" : {
+//                 "prototype" : [
+//                     "convert",
+//                     "f_enc_h",
+//                     "h"
 //                 ]
 //             }
 //         },
 //         {
-//             “$ref” : “#/definitions/MyModule/Num”,
-//             ”decoding” : {
-//                 ”prototype” : [
-//                     ”convert”,
-//                     ”f_dec_n”,
-//                     ”o”
+//             "$ref" : "#/definitions/MyModule/Num",
+//             "decoding" : {
+//                 "prototype" : [
+//                     "convert",
+//                     "f_dec_n",
+//                     "o"
 //                 ]
 //             }
 //         }
@@ -6022,13 +6027,13 @@ In addition to the "definitions" keyword specified above, the schema segments of
 * `"subType"`: distinguishes two or more types whose schema segments would otherwise be identical (such as: charstring and universal charstring; record and set; record of and set of)
 * `"fieldOrder"`: stores the order of the fields of a record or set (value: an array containing the field names) – only needed if there are at least 2 fields
 * `"originalName"`: stores the original name of a record/set field (see <<effect-of-coding-instructions-on-the-schema, here>>)
-* `"unusedAlias"`: stores the alias of a record/set/union field name, if it doesn’t appear under a "properties" keyword (see <<effect-of-coding-instructions-on-the-schema, here>>)
+* `"unusedAlias"`: stores the alias of a record/set/union field name, if it doesn't appear under a "properties" keyword (see <<effect-of-coding-instructions-on-the-schema, here>>)
 * `"omitAsNull"`: specifies if the "omit as null" JSON encoding instruction is present for an optional field of a record or set (see <<schema-segments-for-records-and-sets, here>> and <<effect-of-coding-instructions-on-the-schema, here>>)
 * `"numericValues"`: lists the numeric values of the enumerated items (in the same order as the items themselves)
 
-A schema segment is generated for each type that has its own definition in TTCN-3. References to other types in TTCN-3 type definitions are converted into references in the JSON schema. Schema segments for embedded TTCN-3 type definitions are defined inside their parent type’s schema segment (see <<schema-segments-for-records-and-sets, here>> and <<schema-segments-for-records-of-sets-of-and-arrays, here>> for examples).
+A schema segment is generated for each type that has its own definition in TTCN-3. References to other types in TTCN-3 type definitions are converted into references in the JSON schema. Schema segments for embedded TTCN-3 type definitions are defined inside their parent type's schema segment (see <<schema-segments-for-records-and-sets, here>> and <<schema-segments-for-records-of-sets-of-and-arrays, here>> for examples).
 
-The examples in the following sections will only contain JSON schema segments, not complete schemas (generated for one or more TTCN-3/ASN.1 type definitions, not the whole module). These schema segments contain the type name and the schema that validates the type. In a complete JSON schema these segments would be directly under the module’s property, which is under "definitions" (for examples see section <<top-level, Top Level>>, types "Height" and "Num").
+The examples in the following sections will only contain JSON schema segments, not complete schemas (generated for one or more TTCN-3/ASN.1 type definitions, not the whole module). These schema segments contain the type name and the schema that validates the type. In a complete JSON schema these segments would be directly under the module's property, which is under "definitions" (for examples see section <<top-level, Top Level>>, types "Height" and "Num").
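The generated schemas can also be exercised outside of TITAN. The following is a minimal, illustrative Python sketch (not part of the toolset) that checks a JSON value against the "Num" segment from the Top Level example above. It handles only the standard keywords used in these examples; TITAN's extra keywords ("subType", "fieldOrder", "numericValues", …) carry metadata for the codec and do not constrain instances, so they are skipped.

```python
import json

# Map of JSON Schema "type" names to the Python types that satisfy them
TYPE_CHECKS = {
    "object": dict, "array": list, "string": str,
    "integer": int, "number": (int, float), "boolean": bool,
}

def check_segment(schema, value):
    """Return True if value satisfies the (simplified) schema segment."""
    expected = schema.get("type")
    if expected and not isinstance(value, TYPE_CHECKS[expected]):
        return False
    if expected == "object":
        props = schema.get("properties", {})
        for name in schema.get("required", []):
            if name not in value:
                return False
        for name, field in value.items():
            if name in props:
                if not check_segment(props[name], field):
                    return False
            elif schema.get("additionalProperties") is False:
                return False
    return True

# Schema segment for type "Num", as generated in the Top Level example
num_schema = {
    "type": "object",
    "subType": "set",                      # TITAN metadata, not validated
    "properties": {"num": {"type": "integer"}},
    "additionalProperties": False,
    "required": ["num"],
}

print(check_segment(num_schema, json.loads('{"num": 3}')))    # True
print(check_segment(num_schema, json.loads('{"x": 1}')))      # False
```

A full-featured validator (e.g. the third-party `jsonschema` package) behaves the same way with regard to the extra keywords: unknown keywords are ignored by standard JSON Schema validation.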
 
 ==== Schema segments for basic types
 
@@ -6037,76 +6042,76 @@ The JSON encoding of basic types is detailed in section <<basic-types, Basic Typ
 ----
 // integer(TTCN-3) and INTEGER(ASN.1):
 // {
-//     “type” : “integer”
+//     "type" : "integer"
 // }
 // float(TTCN-3) and REAL(ASN.1):
 // {
-//     “anyOf” : [
+//     "anyOf" : [
 //         {
-//             “type” : “number”
+//             "type" : "number"
 //         },
 //         {
-//             “enum” : [
-//                 “not_a_number”,
-//                 “infinity”,
-//                 “-infinity”
+//             "enum" : [
+//                 "not_a_number",
+//                 "infinity",
+//                 "-infinity"
 //             ]
 //         }
 //     ]
 // }
 // boolean(TTCN-3) and BOOLEAN(ASN.1):
 // {
-//     “type” : “boolean”
+//     "type" : "boolean"
 // }
 // charstring(TTCN-3), NumericString(ASN.1), PrintableString(ASN.1),
 // IA5String(ASN.1) and VisibleString(ASN.1):
 // {
-//     “type” : “string”,
-//     “subType” : “charstring”
+//     "type" : "string",
+//     "subType" : "charstring"
 // }
 // universal charstring(TTCN-3), GeneralString(ASN.1), UTF8String(ASN.1),
 // UniversalString(ASN.1), BMPString(ASN.1), GraphicString(ASN.1),
 // TeletexString(ASN.1) and VideotexString(ASN.1):
 // {
-//     “type” : “string”,
-//     “subType” : “universal charstring”
+//     "type" : "string",
+//     "subType" : "universal charstring"
 // }
 // bitstring(TTCN-3) and BIT STRING(ASN.1):
 // {
-//     “type” : “string”,
-//     “subType” : “bitstring”,
-//     “pattern” : “^[01]*$”
+//     "type" : "string",
+//     "subType" : "bitstring",
+//     "pattern" : "^[01]*$"
 // }
 // hexstring(TTCN-3):
 // {
-//     “type” : “string”,
-//     “subType” : “hexstring”,
-//     “pattern” : “^[0-9A-Fa-f]*$”
+//     "type" : "string",
+//     "subType" : "hexstring",
+//     "pattern" : "^[0-9A-Fa-f]*$"
 // }
 // octetstring(TTCN-3), OCTET STRING(ASN.1) and ANY(ASN.1):
 // {
-//     “type” : “string”,
-//     “subType” : “octetstring”,
-//     “pattern” : “^([0-9A-Fa-f][0-9A-Fa-f])*$”
+//     "type" : "string",
+//     "subType" : "octetstring",
+//     "pattern" : "^([0-9A-Fa-f][0-9A-Fa-f])*$"
 // }
 // NULL(ASN.1):
 // {
-//     “type” : “null”
+//     "type" : "null"
 // }
 // objid(TTCN-3), OBJECT IDENTIFIER(ASN.1) and RELATIVE-OID(ASN.1):
 // {
-//     “type” : “string”,
-//     “subType” : “objid”,
-//     “pattern” : “^[0-2][.][1-3]?[0-9]([.][0-9]|([1-9][0-9]+))*$”
+//     "type" : "string",
+//     "subType" : "objid",
+//     "pattern" : "^[0-2][.][1-3]?[0-9]([.][0-9]|([1-9][0-9]+))*$"
 // }
 // verdicttype:
 // {
-//     “enum” : [
-//         “none”,
-//         “pass”,
-//         “inconc”,
-//         “fail”,
-//         “error”
+//     "enum" : [
+//         "none",
+//         "pass",
+//         "inconc",
+//         "fail",
+//         "error"
 //     ]
 // }
 // Enumerated types are converted the same way as the verdicttype with the
@@ -6119,15 +6124,15 @@ type enumerated Season {
 Season ::= ENUMERATED {
   spring (1), summer (2), fall (3), winter (4)
 }
-// JSON schema segment for type “Season”:
-// “Season” : {
-//     “enum” : [
-//         “spring”,
-//         “summer”,
-//         “fall”,
-//         “winter”
+// JSON schema segment for type "Season":
+// "Season" : {
+//     "enum" : [
+//         "spring",
+//         "summer",
+//         "fall",
+//         "winter"
 //     ],
-//     “numericValues” : [
+//     "numericValues" : [
 //         1,
 //         2,
 //         3,
@@ -6138,9 +6143,9 @@ Season ::= ENUMERATED {
 [[schema-segments-for-records-and-sets]]
 ==== Schema segments for records and sets
 
-The JSON object type is used for records and sets. The "properties" keyword specifies the fields of the record (each property’s key is the field name, and the value is the field’s schema segment). Additional properties are not accepted ("additionalProperties" : false). The "required" keyword determines which fields are mandatory (the names of all non-optional fields are listed here).
+The JSON object type is used for records and sets. The "properties" keyword specifies the fields of the record (each property's key is the field name, and the value is the field's schema segment). Additional properties are not accepted ("additionalProperties" : false). The "required" keyword determines which fields are mandatory (the names of all non-optional fields are listed here).
 
-Optional fields have an "anyOf" structure directly under "properties" (instead of the field’s schema segment). The "anyOf" structure contains the JSON null value and the field’s schema segment. The "omitAsNull" keyword is used to specify how omitted optional values are encoded (after the "anyOf" structure).
+Optional fields have an "anyOf" structure directly under "properties" (instead of the field's schema segment). The "anyOf" structure contains the JSON null value and the field's schema segment. The "omitAsNull" keyword is used to specify how omitted optional values are encoded (after the "anyOf" structure).
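The two behaviours selected by "omitAsNull" can be sketched with a hypothetical encoder (illustrative Python, not TITAN code): with _omit as null_ the omitted optional field appears with a JSON null value, otherwise it is simply left out of the object.

```python
def encode_record(fields, omitted, omit_as_null):
    """fields: dict of present field values; omitted: names of omitted
    optional fields; omit_as_null: the value of the "omitAsNull" keyword."""
    result = dict(fields)
    if omit_as_null:
        for name in omitted:
            result[name] = None      # encoded as JSON null
    # otherwise omitted optional fields do not appear in the object at all
    return result

print(encode_record({"name": "TR-3000"}, ["id"], True))
# {'name': 'TR-3000', 'id': None}
print(encode_record({"name": "TR-3000"}, ["id"], False))
# {'name': 'TR-3000'}
```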
 
 Examples:
 [source]
@@ -6160,57 +6165,57 @@ Product ::= SEQUENCE {
   id OCTET STRING OPTIONAL,
   from VisibleString
 }
-// Schema segment for type “Product”:
-// “Product” : {
-//     “type” : “object”,
-//     “subType” : “record”,
-//     “properties” : {
-//         “name” : {
-//             “type” : “string”,
-//             “subType” : “charstring”
+// Schema segment for type "Product":
+// "Product" : {
+//     "type" : "object",
+//     "subType" : "record",
+//     "properties" : {
+//         "name" : {
+//             "type" : "string",
+//             "subType" : "charstring"
 //         },
-//         “price” : {
-//             “anyOf” : [
+//         "price" : {
+//             "anyOf" : [
 //                 {
-//                     “type” : “number”
+//                     "type" : "number"
 //                 },
 //                 {
-//                     “enum” : [
-//                     “not_a_number”,
-//                     “infinity”,
-//                     “-infinity”
+//                     "enum" : [
+//                     "not_a_number",
+//                     "infinity",
+//                     "-infinity"
 //                     ]
 //                 }
 //             ]
 //         },
-//         “id” : {
-//             “anyOf” : [
+//         "id" : {
+//             "anyOf" : [
 //                 {
-//                     “type” : “null”
+//                     "type" : "null"
 //                 },
 //                 {
-//                     “type” : “string”,
-//                     “subType” : “octetstring”,
-//                     “pattern” : “^([0-9A-Fa-f][0-9A-Fa-f])*$”
+//                     "type" : "string",
+//                     "subType" : "octetstring",
+//                     "pattern" : "^([0-9A-Fa-f][0-9A-Fa-f])*$"
 //                 }
 //             ],
-//             “omitAsNull” : false
+//             "omitAsNull" : false
 //         },
-//         “from” : {
-//             “type” : “string”,
-//             “subType” : “charstring”
+//         "from" : {
+//             "type" : "string",
+//             "subType" : "charstring"
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “fieldOrder” : [
-//         “name”,
-//         “price”,
-//         “id”,
-//         “from”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "name",
+//         "price",
+//         "id",
+//         "from"
 //     ],
-//     “required” : [
-//         “name”,
-//         “price”,
-//         “from”
+//     "required" : [
+//         "name",
+//         "price",
+//         "from"
 //     ]
 // }
 // Example 2: embedded type definition
@@ -6230,57 +6235,57 @@ Barrels ::= SET {
     filled BOOLEAN
   }
 }
-// JSON schema segment for type “Barrels”:
-// “Barrels” : {
-//     “type” : “object”,
-//     “subType” : “set”,
-//     “properties” : {
-//         “numBarrels” : {
-//             “type” : “integer”
+// JSON schema segment for type "Barrels":
+// "Barrels" : {
+//     "type" : "object",
+//     "subType" : "set",
+//     "properties" : {
+//         "numBarrels" : {
+//             "type" : "integer"
 //         },
-//         “barrelType” : {
-//             “type” : “object”,
-//             “subType” : “record”,
-//             “properties” : {
-//                 “size” : {
-//                     “enum” : [
-//                         “Small”,
-//                         “Medium”,
-//                         “Large”
+//         "barrelType" : {
+//             "type" : "object",
+//             "subType" : "record",
+//             "properties" : {
+//                 "size" : {
+//                     "enum" : [
+//                         "Small",
+//                         "Medium",
+//                         "Large"
 //                     ],
-//                     “numericValues” : [
+//                     "numericValues" : [
 //                         0,
 //                         1,
 //                         2
 //                     ]
 //                 },
-//                 “filled” : {
-//                     “type” : “boolean”
+//                 "filled" : {
+//                     "type" : "boolean"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “fieldOrder” : [
-//                 “size”,
-//                 “filled”
+//             "additionalProperties" : false,
+//             "fieldOrder" : [
+//                 "size",
+//                 "filled"
 //             ],
-//             “required” : [
-//                 “size”,
-//                 “filled”
+//             "required" : [
+//                 "size",
+//                 "filled"
 //             ]
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “fieldOrder” : [
-//         “numBarrels”,
-//         “barrelType”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "numBarrels",
+//         "barrelType"
 //     ],
-//     “required” : [
-//         “numBarrels”,
-//         “barrelType”
+//     "required" : [
+//         "numBarrels",
+//         "barrelType"
 //     ]
 // }
 // Example 3: separate type definitions and references
-// (the module name is “MyModule”)
+// (the module name is "MyModule")
 // TTCN-3:
 type enumerated Size { Small, Medium, Large };
 type record BarrelType {
@@ -6301,59 +6306,59 @@ Barrels ::= SET {
   numBarrels INTEGER,
   barrelType BarrelType
 }
-// Schema segments for types “Size”, “BarrelType” and “Barrels”:
-// ”Size” : {
-//     ”enum” : [
-//         ”Small”,
-//         ”Medium”,
-//         ”Large”
+// Schema segments for types "Size", "BarrelType" and "Barrels":
+// "Size" : {
+//     "enum" : [
+//         "Small",
+//         "Medium",
+//         "Large"
 //     ],
-//     “numericValues” : [
+//     "numericValues" : [
 //         0,
 //         1,
 //         2
 //     ]
 // }
-// “BarrelType” : {
-//     “type” : “object”,
-//     “subType” : “record”,
-//     “properties” : {
-//         “size” : {
-//             “$ref” : “#/definitions/MyModule/Size”
+// "BarrelType" : {
+//     "type" : "object",
+//     "subType" : "record",
+//     "properties" : {
+//         "size" : {
+//             "$ref" : "#/definitions/MyModule/Size"
 //         },
-//         “filled” : {
-//             “type” : “boolean”
+//         "filled" : {
+//             "type" : "boolean"
 //         }
 //     },
-//     ”additionalProperties” : false,
-//     ”fieldOrder” : [
-//         ”size”,
-//         ”filled”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "size",
+//         "filled"
 //     ],
-//     ”required” : [
-//         ”size”,
-//         ”filled”
+//     "required" : [
+//         "size",
+//         "filled"
 //     ]
 // },
-// ”Barrels” : {
-//     ”type” : ”object”,
-//     ”subType” : ”set”,
-//     ”properties” : {
-//         ”numBarrels” : {
-//             ”type” : ”integer”
+// "Barrels" : {
+//     "type" : "object",
+//     "subType" : "set",
+//     "properties" : {
+//         "numBarrels" : {
+//             "type" : "integer"
 //         },
-//         ”barrelType” : {
-//             ”$ref” : ”#/definitions/MyModule/BarrelType”
+//         "barrelType" : {
+//             "$ref" : "#/definitions/MyModule/BarrelType"
 //         }
 //     },
-//     ”additionalProperties” : false,
-//     ”fieldOrder” : [
-//         ”numBarrels”,
-//         ”barrelType”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "numBarrels",
+//         "barrelType"
 //     ],
-//     ”required” : [
-//         ”numBarrels”,
-//         ”barrelType”
+//     "required" : [
+//         "numBarrels",
+//         "barrelType"
 //     ]
 // }
 ----
@@ -6373,25 +6378,25 @@ Examples:
 type record of bitstring Bits;
 // ASN.1:
 Bits ::= SEQUENCE OF BIT STRING
-// Schema segment for type “Bits”:
-// “Bits” : {
-//     “type” : “array”,
-//     “subType” : “record of”,
-//     “items” : {
-//         “type” : “string”,
-//         “subType” : “bitstring”,
-//         “pattern” : “^[01]*$”
+// Schema segment for type "Bits":
+// "Bits" : {
+//     "type" : "array",
+//     "subType" : "record of",
+//     "items" : {
+//         "type" : "string",
+//         "subType" : "bitstring",
+//         "pattern" : "^[01]*$"
 //     }
 // }
 // Example 2 (TTCN-3 only):
 type integer Ints[4];
-// Schema segment for type “Ints”:
-// “Ints” : {
-//     “type” : “array”,
-//     “minItems” : 4,
-//     “maxItems” : 4,
-//     “items” : {
-//         “type” : “integer”
+// Schema segment for type "Ints":
+// "Ints" : {
+//     "type" : "array",
+//     "minItems" : 4,
+//     "maxItems" : 4,
+//     "items" : {
+//         "type" : "integer"
 //     }
 // }
 // Example 3:
@@ -6400,12 +6405,12 @@ type integer Ints[4];
 type set of Num Nums;
 // ASN.1:
 Nums ::= SET OF Num
-// JSON schema segment for type “Nums”:
-// “Nums” : {
-//     “type” : “array”,
-//     “subType” : “set of”,
-//     “items” : {
-//         “$ref” : “#/definitions/MyModule/Num”
+// JSON schema segment for type "Nums":
+// "Nums" : {
+//     "type" : "array",
+//     "subType" : "set of",
+//     "items" : {
+//         "$ref" : "#/definitions/MyModule/Num"
 //     }
 // }
 // Example 4:
@@ -6414,21 +6419,21 @@ Nums ::= SET OF Num
 type set of set { integer num } Nums;
 // ASN.1:
 Nums ::= SET OF SET { num INTEGER }
-// JSON schema segment for type “Nums”:
-// “Nums” : {
-//     “type” : “array”,
-//     “subType” : “set of”,
-//     “items” : {
-//         “type” : “object”,
-//         “subType” : “set”,
-//         “properties” : {
-//             “num” : {
-//                 “type” : “integer”
+// JSON schema segment for type "Nums":
+// "Nums" : {
+//     "type" : "array",
+//     "subType" : "set of",
+//     "items" : {
+//         "type" : "object",
+//         "subType" : "set",
+//         "properties" : {
+//             "num" : {
+//                 "type" : "integer"
 //             }
 //         },
-//         “additionalProperties” : false,
-//         “required” : [
-//             “num”
+//         "additionalProperties" : false,
+//         "required" : [
+//             "num"
 //         ]
 //     }
 // }
@@ -6456,66 +6461,66 @@ Thing ::= CHOICE {
   cs VisibleString,
   rec SEQUENCE { num INTEGER }
 }
-// Schema segment for type “Thing”:
-// “Thing” : {
-//     “anyOf” : [
+// Schema segment for type "Thing":
+// "Thing" : {
+//     "anyOf" : [
 //         {
-//             “type” : “object”,
-//             “properties” : {
-//                 “b” : {
-//                     “type” : “boolean”
+//             "type" : "object",
+//             "properties" : {
+//                 "b" : {
+//                     "type" : "boolean"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “b”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "b"
 //             ]
 //         },
 //         {
-//             “type” : “object”,
-//             “properties” : {
-//                 “i” : {
-//                     “type” : “integer”
+//             "type" : "object",
+//             "properties" : {
+//                 "i" : {
+//                     "type" : "integer"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “i”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "i"
 //             ]
 //         },
 //         {
-//             “type” : “object”,
-//             “properties” : {
-//                 “cs” : {
-//                     “type” : “string”,
-//                     “subType” : “charstring”
+//             "type" : "object",
+//             "properties" : {
+//                 "cs" : {
+//                     "type" : "string",
+//                     "subType" : "charstring"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “cs”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "cs"
 //             ]
 //         },
 //         {
-//             “type” : “object”,
-//             “properties” : {
-//                 “rec” : {
-//                     “type” : “object”,
-//                     “subType” : “record”,
-//                     “properties” : {
-//                         “num” : {
-//                             “type” : “integer”
+//             "type" : "object",
+//             "properties" : {
+//                 "rec" : {
+//                     "type" : "object",
+//                     "subType" : "record",
+//                     "properties" : {
+//                         "num" : {
+//                             "type" : "integer"
 //                         }
 //                     },
-//                     “additionalProperties” : false,
-//                     “required” : [
-//                         “num”
+//                     "additionalProperties" : false,
+//                     "required" : [
+//                         "num"
 //                     ]
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “rec”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "rec"
 //             ]
 //         }
 //     ]
@@ -6524,36 +6529,36 @@ Thing ::= CHOICE {
 module … {
   …
 } with {
-  extension “anytype integer,charstring”
+  extension "anytype integer,charstring"
   // the anytype must be referenced at least once,
-  // otherwise its schema segment won’t be generated
+  // otherwise its schema segment won't be generated
 }
 // JSON schema segment for the anytype:
-// “anytype” : {
-//     “anyOf” : [
+// "anytype" : {
+//     "anyOf" : [
 //         {
-//             “type” : “object”,
-//             “properties” : {
-//                 “integer” : {
-//                     “type” : “integer”
+//             "type" : "object",
+//             "properties" : {
+//                 "integer" : {
+//                     "type" : "integer"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “integer”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "integer"
 //             ]
 //         },
 //         {
-//             “type” : “object”,
-//             “properties” : {
-//                 “charstring” : {
-//                     “type” : “string”,
-//                     “subType” : “charstring”
+//             "type" : "object",
+//             "properties" : {
+//                 "charstring" : {
+//                     "type" : "string",
+//                     "subType" : "charstring"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “charstring”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "charstring"
 //             ]
 //         }
 //     ]
@@ -6568,17 +6573,17 @@ Example:
 // Continuing example 1 (ASN.1 only):
 NumRec ::= rec < Thing
 // JSON schema segment for type NumRec:
-// “NumRec” : {
-//     “type” : “object”,
-//     “subType” : “record”,
-//     “properties” : {
-//         “num” : {
-//             “type” : “integer”
+// "NumRec" : {
+//     "type" : "object",
+//     "subType" : "record",
+//     "properties" : {
+//         "num" : {
+//             "type" : "integer"
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “required” : [
-//         “num”
+//     "additionalProperties" : false,
+//     "required" : [
+//         "num"
 //     ]
 // }
 ----
@@ -6586,13 +6591,13 @@ NumRec ::= rec < Thing
 [[effect-of-coding-instructions-on-the-schema]]
 ==== Effect of coding instructions on the schema
 
-For the list of JSON coding instructions see <<attributes-1, here>>. As mentioned before, only TTCN-3 types can have coding instructions, ASN.1 types can’t.
+For the list of JSON coding instructions see <<attributes-1, here>>. As mentioned before, only TTCN-3 types can have coding instructions, ASN.1 types can't.
 
-* _omit as null_ – its presence is indicated by the "omitAsNull" keyword (true, if it’s present)
-* _name as …_ - the alias is used under "properties" instead of the field’s name in TTCN-3; the original name is stored under the "originalName" key
-* _as value_ – the union’s "anyOf" structure contains the fields’ schema segments instead of the JSON objects with one property; the field’s name is stored under the "originalName" key
+* _omit as null_ – its presence is indicated by the "omitAsNull" keyword (true, if it's present)
+* _name as …_ – the alias is used under "properties" instead of the field's name in TTCN-3; the original name is stored under the "originalName" key
+* _as value_ – the union's "anyOf" structure contains the fields' schema segments instead of the JSON objects with one property; the field's name is stored under the "originalName" key
 * _default_ – specified by the "default" keyword
-* _extend_ – adds a custom key-value pair to the type’s schema segment
+* _extend_ – adds a custom key-value pair to the type's schema segment
 * _as value_ + _name as …_ – the field name aliases are stored under the "unusedAlias" keyword, as there are no more JSON objects with one property to store them in (they are also ignored by both the schema and the encoder/decoder, and a compiler warning is displayed in this case)
 * _metainfo for unbound_ – is ignored by the schema generator
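The effect of _as value_ on the encoded output can be sketched as follows (illustrative Python, not TITAN code): without the instruction the chosen union alternative is wrapped in a one-property JSON object keyed by the field name; with it, only the value itself appears.

```python
import json

def encode_union(field_name, value, as_value=False):
    """Encode a union's chosen alternative, with or without "as value"."""
    if as_value:
        return json.dumps(value)              # value only, field name dropped
    return json.dumps({field_name: value})    # default: one-property object

print(encode_union("i", 42))                 # {"i": 42}
print(encode_union("i", 42, as_value=True))  # 42
```

This is also why _name as …_ has no effect when combined with _as value_: there is no longer a JSON object property to carry the alias.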
 
@@ -6604,62 +6609,62 @@ type record Rec {
   integer num optional,
   universal charstring str optional
 } with {
-  variant(num) “JSON : omit as null”
-}
-// Schema segment for type “Rec”:
-// “Rec” : {
-//     “type” : “object”,
-//     “subType” : “record”,
-//     “properties” : {
-//         “num” : {
-//             “anyOf” : [
+  variant(num) "JSON : omit as null"
+}
+// Schema segment for type "Rec":
+// "Rec" : {
+//     "type" : "object",
+//     "subType" : "record",
+//     "properties" : {
+//         "num" : {
+//             "anyOf" : [
 //                 {
-//                     “type” : “null”
+//                     "type" : "null"
 //                 },
 //                 {
-//                     “type” : “integer”
+//                     "type" : "integer"
 //                 }
 //             ],
-//             “omitAsNull” : true
+//             "omitAsNull" : true
 //         },
-//         “str” : {
-//             “anyOf” : [
+//         "str" : {
+//             "anyOf" : [
 //                 {
-//                     “type” : “null”
+//                     "type" : "null"
 //                 },
 //                 {
-//                     “type” : “string”,
-//                     “subType” : “universal charstring”
+//                     "type" : "string",
+//                     "subType" : "universal charstring"
 //                 }
 //             ],
-//             “omitAsNull” : false
+//             "omitAsNull" : false
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “fieldOrder” : [
-//         “num”,
-//         “str”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "num",
+//         "str"
 //     ]
 // }
 // Example 2: name as ...
 type set Num {
   integer num
 } with {
-    variant(num) ”JSON : name as number”
-}
-// Schema segment for type “Num”:
-// ”Num” : {
-//     ”type” : ”object”,
-//     ”subType” : ”set”,
-//     “properties” : {
-//         “number” : {
-//             “originalName” : “num”,
-//             “type” : “integer”
+    variant(num) "JSON : name as number"
+}
+// Schema segment for type "Num":
+// "Num" : {
+//     "type" : "object",
+//     "subType" : "set",
+//     "properties" : {
+//         "number" : {
+//             "originalName" : "num",
+//             "type" : "integer"
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “required” : [
-//         “number”
+//     "additionalProperties" : false,
+//     "required" : [
+//         "number"
 //     ]
 // }
 // Example 3: as value and name as ...
@@ -6669,40 +6674,40 @@ type union Thing {
   charstring cs,
   record { integer num } rec
 } with {
-  variant “JSON : as value”;
-  variant(i) “JSON : name as int”;
-  variant(cs) “JSON : name as str”;
+  variant "JSON : as value";
+  variant(i) "JSON : name as int";
+  variant(cs) "JSON : name as str";
 }
-// Schema segment for type “Thing”:
-// “Thing” : {
-//     “anyOf” : [
+// Schema segment for type "Thing":
+// "Thing" : {
+//     "anyOf" : [
 //         {
-//             “originalName” : “b”,
-//             “type” : “boolean”
+//             "originalName" : "b",
+//             "type" : "boolean"
 //         },
 //         {
-//             “originalName” : “i”,
-//             “unusedAlias” : “int”,
-//             “type” : “integer”
+//             "originalName" : "i",
+//             "unusedAlias" : "int",
+//             "type" : "integer"
 //         },
 //         {
-//             “originalName” : “cs”,
-//             “unusedAlias” : “str”,
-//             “type” : “string”,
-//             “subType” : “charstring”
+//             "originalName" : "cs",
+//             "unusedAlias" : "str",
+//             "type" : "string",
+//             "subType" : "charstring"
 //         },
 //         {
-//             “originalName” : “rec”,
-//             “type” : “object”,
-//             “subType” : “record”,
-//             “properties” : {
-//                 “num” : {
-//                     “type” : “integer”
+//             "originalName" : "rec",
+//             "type" : "object",
+//             "subType" : "record",
+//             "properties" : {
+//                 "num" : {
+//                     "type" : "integer"
 //                 }
 //             },
-//             “additionalProperties” : false,
-//             “required” : [
-//                 “num”
+//             "additionalProperties" : false,
+//             "required" : [
+//                 "num"
 //             ]
 //         }
 //     ]
@@ -6712,46 +6717,46 @@ type record Rec {
   integer num,
   universal charstring str
 } with {
-  variant(num) “JSON : default(0)”;
-  variant(str) “JSON : default(empty)”;
-}
-// JSON schema segment for type “Rec”:
-// “Rec” : {
-//     “type” : “object”,
-//     “subType” : “record”,
-//     “properties” : {
-//         “num” : {
-//             “type” : “integer”,
-//             “default” : 0
+  variant(num) "JSON : default(0)";
+  variant(str) "JSON : default(empty)";
+}
+// JSON schema segment for type "Rec":
+// "Rec" : {
+//     "type" : "object",
+//     "subType" : "record",
+//     "properties" : {
+//         "num" : {
+//             "type" : "integer",
+//             "default" : 0
 //         },
-//         “str” : {
-//             “type” : “string”,
-//             “subType” : “universal charstring”,
-//             “default” : “empty”
+//         "str" : {
+//             "type" : "string",
+//             "subType" : "universal charstring",
+//             "default" : "empty"
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “fieldOrder” : [
-//         “num”,
-//         “str”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "num",
+//         "str"
 //     ],
-//     “required” : [
-//         “num”,
-//         “str”
+//     "required" : [
+//         "num",
+//         "str"
 //     ]
 // }
 // Example 5: extend
 type record Number {
   integer val
 } with {
-  variant “JSON:extend(comment):(first)”;
-  variant “ JSON : extend (comment) : (second (todo: add more fields\)) ”;
-  variant “JSON: extend(description):(a record housing an integer)”;
-  variant(val) “JSON: extend(description):(an integer)”;
-  variant(val) “JSON: extend(subType):(positive integer)”;
+  variant "JSON:extend(comment):(first)";
+  variant " JSON : extend (comment) : (second (todo: add more fields\)) ";
+  variant "JSON: extend(description):(a record housing an integer)";
+  variant(val) "JSON: extend(description):(an integer)";
+  variant(val) "JSON: extend(subType):(positive integer)";
 }
 
-// Schema segment for type “Number”:
+// Schema segment for type "Number":
 // "Number" : {
 //     "type" : "object",
 //     "subType" : "record",
@@ -6776,7 +6781,7 @@ type record Number {
 // attribute 'extend'
 // warning: Key 'comment' is used multiple times in 'extend' attributes of type
 // '@MyModule.Number'
-// (The multiple uses of ‘description’ don’t generate a warning, since these
+// (The multiple uses of 'description' don't generate a warning, since these
 // belong to different types.)
 ----
 
@@ -6784,11 +6789,11 @@ type record Number {
 
 JSON encoding/decoding functions can only be declared in TTCN-3 modules; however, they can be defined for both TTCN-3 types and imported ASN.1 types.
 
-Information related to a type’s JSON encoding/decoding external function is stored after the reference to the type’s schema segment in the top level "anyOf" structure.
+Information related to a type's JSON encoding/decoding external function is stored after the reference to the type's schema segment in the top level "anyOf" structure.
 
 Extra JSON schema keywords for the external function properties:
 
-* `"encoding"` and `"decoding"`: stores the specifics of the encoding or decoding function as properties (directly under the top level `"anyOf"`, after the reference to the type’s schema segment)
+* `"encoding"` and `"decoding"`: stores the specifics of the encoding or decoding function as properties (directly under the top level `"anyOf"`, after the reference to the type's schema segment)
 * `"prototype"`: an array containing the prototype of the encoding or decoding function (as a string), the function name, and the parameter names used in its declaration (directly under `"encoding"` or `"decoding"`)
 * `"printing"`: stores the printing settings (values: `"compact"` or `"pretty"`; directly under `"encoding"`)
 * `"errorBehavior"`: an object containing the error behavior modifications as its properties; each modification has the error type as key and the error handling as value (directly under `"encoding"` or `"decoding"`)
@@ -6802,64 +6807,64 @@ module Mod {
     boolean b
   }
   external function f_enc(in Rec x) return octetstring with {
-    extension “prototype(convert) encode(JSON) printing(pretty)”
+    extension "prototype(convert) encode(JSON) printing(pretty)"
   }
   external function f_dec(in octetstring o, out Rec x) with {
-    extension “prototype(fast) decode(JSON)”
-    extension “errorbehavior(ALL:WARNING,INVAL_MSG:ERROR)”
+    extension "prototype(fast) decode(JSON)"
+    extension "errorbehavior(ALL:WARNING,INVAL_MSG:ERROR)"
   }
 
 } with {
-  encode “JSON”
+  encode "JSON"
 }
 // JSON schema:
 // {
-//     “definitions” : {
-//         “Mod” : {
-//             “Rec” : {
-//                 “type” : “object”,
-//                 “subType” : “record”,
-//                 “properties” : {
-//                     “num” : {
-//                         “type” : “integer”
+//     "definitions" : {
+//         "Mod" : {
+//             "Rec" : {
+//                 "type" : "object",
+//                 "subType" : "record",
+//                 "properties" : {
+//                     "num" : {
+//                         "type" : "integer"
 //                     },
-//                     “b” : {
-//                         “type” : “boolean”
+//                     "b" : {
+//                         "type" : "boolean"
 //                     }
 //                 },
-//                 “additionalProperties” : false,
-//                 “fieldOrder” : [
-//                     “num”,
-//                     “b”
+//                 "additionalProperties" : false,
+//                 "fieldOrder" : [
+//                     "num",
+//                     "b"
 //                 ],
-//                 “required” : [
-//                     “num”,
-//                     “b”
+//                 "required" : [
+//                     "num",
+//                     "b"
 //                 ]
 //             }
 //         }
 //     },
-//     “anyOf” : [
+//     "anyOf" : [
 //         {
-//             “$ref” : “#/definitions/Mod/Rec”,
-//             “encoding” : {
-//                 “prototype” : [
-//                     “convert”,
-//                     “f_enc”,
-//                     “x”
+//             "$ref" : "#/definitions/Mod/Rec",
+//             "encoding" : {
+//                 "prototype" : [
+//                     "convert",
+//                     "f_enc",
+//                     "x"
 //                 ],
-//                 “printing” : “pretty”
+//                 "printing" : "pretty"
 //             },
-//             “decoding” : {
-//                 “prototype” : [
-//                     “fast”,
-//                     “f_dec”,
-//                     “o”,
-//                     “x”
+//             "decoding" : {
+//                 "prototype" : [
+//                     "fast",
+//                     "f_dec",
+//                     "o",
+//                     "x"
 //                 ],
-//                 “errorBehavior” : {
-//                     “ALL” : “WARNING”,
-//                     “INVAL_MSG” : “ERROR”
+//                 "errorBehavior" : {
+//                     "ALL" : "WARNING",
+//                     "INVAL_MSG" : "ERROR"
 //                 }
 //             }
 //         }
@@ -6869,7 +6874,7 @@ module Mod {
 
 ==== Schema segments for type restrictions
 
-The compiler’s `–ttcn2json` option also generates schema segments for type restrictions (subtyping constraints), even though these are ignored by the JSON encoder and decoder. Only restrictions of TTCN-3 types are converted to JSON schema format, ASN.1 type restrictions are ignored.
+The compiler's `--ttcn2json` option also generates schema segments for type restrictions (subtyping constraints), even though these are ignored by the JSON encoder and decoder. Only restrictions of TTCN-3 types are converted to JSON schema format; ASN.1 type restrictions are ignored.
 
 The generated schema segments only contain basic JSON schema keywords; no extra keywords are needed.
 
@@ -6882,10 +6887,10 @@ The generated schema segments only contain basic JSON schema keywords, no extra
 |Single values |All single values (more specifically, their JSON encodings) are gathered into one JSON `enum`. The keyword `valueList` is used to store single values of unions with the `as value` coding instruction (encoded as if they did not have this coding instruction).
 |Value range restrictions of `integers` and `floats` |The keywords `minimum` and `maximum` are used to specify the range, and the keywords `exclusiveMinimum` and `exclusiveMaximum` indicate whether the limits are exclusive. All value range and single value restrictions are placed in an `anyOf` structure, if there are at least two value ranges, or if there is one value range and at least one single value.
 |Value range restrictions of `charstrings` and `universal charstrings` |All value range restrictions are gathered into a set expression in a JSON schema `pattern`.
-|String pattern restrictions |The TTCN-3 pattern is converted into an extended regular expression and inserted into the schema as a JSON `pattern`. Since the pattern is a JSON string, it cannot contain control characters. These are replaced with the corresponding JSON escape sequences, if available, or with the escape sequence `\u`, followed by the character’s ASCII code in 4 hexadecimal digits. Furthermore all backslashes in the string are doubled.
+|String pattern restrictions |The TTCN-3 pattern is converted into an extended regular expression and inserted into the schema as a JSON `pattern`. Since the pattern is a JSON string, it cannot contain control characters. These are replaced with the corresponding JSON escape sequences, if available, or with the escape sequence `\u`, followed by the character's ASCII code in 4 hexadecimal digits. Furthermore, all backslashes in the string are doubled.
 |===
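The escaping rule in the last table row can be illustrated with a minimal Python sketch (illustrative only, not TITAN source code; the helper name `json_pattern_escape` is made up for this example):

```python
# Sketch of the pattern-escaping rule described above: double every backslash,
# then replace control characters with a named JSON escape where one exists,
# falling back to \u followed by the character code in 4 hexadecimal digits.
def json_pattern_escape(pattern: str) -> str:
    named_escapes = {"\b": "\\b", "\f": "\\f", "\n": "\\n", "\r": "\\r", "\t": "\\t"}
    out = []
    for ch in pattern.replace("\\", "\\\\"):  # double all backslashes first
        if ch in named_escapes:
            out.append(named_escapes[ch])
        elif ord(ch) < 0x20:                  # remaining control characters
            out.append("\\u%04x" % ord(ch))
        else:
            out.append(ch)
    return "".join(out)

print(json_pattern_escape("^a\\d\x02$"))  # prints: ^a\\d\u0002$
```

This mirrors the generated `UnicodePattern` segment below, where `\\q1` becomes `\\\\q1` and the character `char(0,0,0,2)` becomes `\u0002`.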
 
-These schema elements are inserted after the type’s schema segment. If the type’s schema segment only contains a reference to another type (in case it’s a `record`/`set`/`union` field of a type with its own definition or it’s an alias to a type with its own definition), then an `allOf` structure is inserted, which contains the reference as its first element and the restrictions as its second element (since the referenced type may contain some of the schema elements used in this type’s restrictions).
+These schema elements are inserted after the type's schema segment. If the type's schema segment only contains a reference to another type (in case it's a `record`/`set`/`union` field of a type with its own definition or it's an alias to a type with its own definition), then an `allOf` structure is inserted, which contains the reference as its first element and the restrictions as its second element (since the referenced type may contain some of the schema elements used in this type's restrictions).
 
 If the value list restriction contains references to other subtypes, then the schema segments of their restrictions are inserted, too.
 
@@ -6895,8 +6900,8 @@ All non-ASCII characters in `universal` `charstring` single values and patterns
 
 Special cases:
 
-. The restrictions of `floats` are inserted at the end of the first element in the `anyOf` structure, except those that are related to the special values (`infinity`, `-infinity` and `not_a_number`). The `enum` containing the special values is changed, if any of the special values is not allowed by the type’s restrictions. If neither of the special values are allowed, then the `anyOf` structure is omitted, and the type’s schema only contains `type` : `number`, followed by the rest of the restrictions. Similarly, if only special values are allowed by the restrictions, then the type’s schema only contains the `enum` with the valid values.
-. If a verdicttype is restricted (with single values), then only the `enum` containing the list of single values is generated (since it would conflict with the type’s schema segment, which is also an `enum`).
+. The restrictions of `floats` are inserted at the end of the first element in the `anyOf` structure, except those that are related to the special values (`infinity`, `-infinity` and `not_a_number`). The `enum` containing the special values is changed if any of the special values is not allowed by the type's restrictions. If none of the special values is allowed, then the `anyOf` structure is omitted, and the type's schema only contains `type` : `number`, followed by the rest of the restrictions. Similarly, if only special values are allowed by the restrictions, then the type's schema only contains the `enum` with the valid values.
+. If a verdicttype is restricted (with single values), then only the `enum` containing the list of single values is generated (since it would conflict with the type's schema segment, which is also an `enum`).
 . If a single value restriction contains one or more `omit` values, then all possible JSON encodings of the single value are inserted into the `enum`. There are 2^N^ different encodings, where _N_ is the number of `omits` in the single value, since each omitted field can be encoded in 2 ways (by not adding the field to the JSON object, or by adding the field with a `null` value).
 . Single value restrictions of unions with the `as value` coding instruction do not specify which alternative the value was encoded from. Thus, the single values are generated a second time, under the extra keyword `valueList`, as if they belonged to a union without `as value` (with alternative names). This second list does not contain all the combinations of omitted field encodings (mentioned in the previous point), only the one where omitted fields are not added to their JSON objects.
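The 2^N^ omitted-field encodings of point 3 can be sketched as follows (a Python illustration, not TITAN source; `None` stands in for the TTCN-3 `omit` value):

```python
# Enumerate all JSON encodings of a record value with omitted optional fields:
# each omitted field is either left out of the JSON object entirely, or
# emitted as "field" : null, giving 2^N combinations for N omitted fields.
from itertools import product

def omit_encodings(value: dict) -> list:
    omitted = [k for k, v in value.items() if v is None]      # None == omit
    present = {k: v for k, v in value.items() if v is not None}
    encodings = []
    for choice in product((False, True), repeat=len(omitted)):
        enc = dict(present)
        for field, emit_null in zip(omitted, choice):
            if emit_null:
                enc[field] = None          # encoded as "field" : null
        encodings.append(enc)
    return encodings

# { posInt := 7, int := 2, os := omit } has one omitted field -> 2 encodings
print(omit_encodings({"posInt": 7, "int": 2, "os": None}))
```

This reproduces the `RecValues` example below, where the value `{ omit, 1, omit }` (two omitted fields) contributes four objects to the generated `enum`.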
 
@@ -6908,21 +6913,21 @@ Examples:
 type integer PosInt (!0..infinity);
 type PosInt PosIntValues (1, 5, 7, 10);
 
-// Schema segment generated for type “PosInt”:
-// “PosInt” : {
-//     “type” : “integer”,
-//     “minimum” : 0,
-//     “exclusiveMinimum” : true
+// Schema segment generated for type "PosInt":
+// "PosInt" : {
+//     "type" : "integer",
+//     "minimum" : 0,
+//     "exclusiveMinimum" : true
 // }
 
-// Schema segment generated for type “PosIntValues”:
-// “PosIntValues” : {
-//     “allOf” : [
+// Schema segment generated for type "PosIntValues":
+// "PosIntValues" : {
+//     "allOf" : [
 //         {
-//             “$ref” : “#/definitions/MyModule/PosInt”
+//             "$ref" : "#/definitions/MyModule/PosInt"
 //         },
 //         {
-//             “enum” : [
+//             "enum" : [
 //                 1,
 //                 5,
 //                 7,
@@ -6934,44 +6939,44 @@ type PosInt PosIntValues (1, 5, 7, 10);
 
 // Example 2: String type definitions with length, value range and pattern
 // constraints
-type charstring CapitalLetters (“A”..“Z”) length (1..6);
+type charstring CapitalLetters ("A".."Z") length (1..6);
 type charstring CharstringPattern
-  (pattern “*ab?\*\?\(\+[0-9a-fA-F*?\n]#(2,4)\d\w\n\r\s\”x”\\d);
+  (pattern "*ab?\*\?\(\+[0-9a-fA-F*?\n]#(2,4)\d\w\n\r\s\"x\"\\d");
 
 type universal charstring UnicodeStringRanges
-  (“a”.. “z”, char(0, 0, 1, 81)..char(0, 0, 1, 113));
+  ("a".."z", char(0, 0, 1, 81)..char(0, 0, 1, 113));
 type universal charstring UnicodePattern
-  (pattern “abc?\q{ 0, 0, 1, 113 }z\\q1\q{0,0,0,2}”);
-
-// Schema segment generated for type “CapitalLetters”:
-// “CapitalLetters” : {
-//     “type” : “string”,
-//     “subType” : “charstring”,
-//     “minLength” : 1,
-//     “maxLength” : 6,
-//     “pattern” : “^[A-Z]*$”
+  (pattern "abc?\q{ 0, 0, 1, 113 }z\\q1\q{0,0,0,2}");
+
+// Schema segment generated for type "CapitalLetters":
+// "CapitalLetters" : {
+//     "type" : "string",
+//     "subType" : "charstring",
+//     "minLength" : 1,
+//     "maxLength" : 6,
+//     "pattern" : "^[A-Z]*$"
 // }
 
-// Schema segment generated for type “CharstringPattern”:
-// “CharstringPattern” : {
-//     “type” : “string”,
-//     “subType” : “charstring”,
-//     “pattern” : “^.*ab.\\*\\?\\(\\+[\n-\r*0-9?A-Fa-f]{2,4}[0-9][0-9A-Za-z]
-//[\n-\r]\r[\t-\r ]\”x\”\\\\d$”
+// Schema segment generated for type "CharstringPattern":
+// "CharstringPattern" : {
+//     "type" : "string",
+//     "subType" : "charstring",
+//     "pattern" : "^.*ab.\\*\\?\\(\\+[\n-\r*0-9?A-Fa-f]{2,4}[0-9][0-9A-Za-z]
+//[\n-\r]\r[\t-\r ]\"x\"\\\\d$"
 // }
 
-// Schema segment generated for type “UnicodeStringRanges”:
-// “UnicodeStringRanges” : {
-//     “type” : “string”,
-//     “subType” : “universal charstring”,
-//     “pattern” : “^[a-ző-ű]*$”
+// Schema segment generated for type "UnicodeStringRanges":
+// "UnicodeStringRanges" : {
+//     "type" : "string",
+//     "subType" : "universal charstring",
+//     "pattern" : "^[a-ző-ű]*$"
 // }
 
-// Schema segment generated for type “UnicodePattern”:
-// “UnicodePattern” : {
-//     “type” : “string”,
-//     “subType” : “universal charstring”,
-//     “pattern” : “^abc.űz\\\\q1\u0002$”
+// Schema segment generated for type "UnicodePattern":
+// "UnicodePattern" : {
+//     "type" : "string",
+//     "subType" : "universal charstring",
+//     "pattern" : "^abc.űz\\\\q1\u0002$"
 // }
 
 // Example 3: Array type definitions with length restrictions and
@@ -6979,29 +6984,29 @@ type universal charstring UnicodePattern
 type record length (3..infinity) of PosInt PosIntList;
 type set length (2) of integer OnesAndTwos (1, 2);
 
-// Schema segment generated for type “PosIntList”:
-// “PosIntList” : {
-//     “type” : “array”,
-//     “subType” : “record of”,
-//     “items” : {
-//         “$ref” : “#/definitions/MyModule/PosInt”
+// Schema segment generated for type "PosIntList":
+// "PosIntList" : {
+//     "type" : "array",
+//     "subType" : "record of",
+//     "items" : {
+//         "$ref" : "#/definitions/MyModule/PosInt"
 //     },
-//     “minItems” : 3
+//     "minItems" : 3
 // }
 
-// Schema segment generated for type “OnesAndTwos”:
-// “OnesAndTwos” : {
-//     “type” : “array”,
-//     “subType” : “set of”,
-//     “items” : {
-//         “type” : “integer”,
-//         “enum” : [
+// Schema segment generated for type "OnesAndTwos":
+// "OnesAndTwos" : {
+//     "type" : "array",
+//     "subType" : "set of",
+//     "items" : {
+//         "type" : "integer",
+//         "enum" : [
 //             1,
 //             2
 //         ]
 //     },
-//     “minItems” : 2,
-//     “maxItems” : 2
+//     "minItems" : 2,
+//     "maxItems" : 2
 // }
 
 // Example 4: Float type definitions with all kinds of restrictions
@@ -7009,74 +7014,74 @@ type float RestrictedFloat (-infinity..-1.0, 0.0, 0.5, 1.0, not_a_number);
 type float NegativeFloat (!-infinity..!0.0);
 type float InfiniteFloat (-infinity, infinity);
 
-// Schema segment generated for type “RestrictedFloat”:
-// “RestrictedFloat” : {
-//     “anyOf” : [
+// Schema segment generated for type "RestrictedFloat":
+// "RestrictedFloat" : {
+//     "anyOf" : [
 //         {
-//             “type” : “number”,
-//             “anyOf” : [
+//             "type" : "number",
+//             "anyOf" : [
 //                 {
-//                     “enum” : [
+//                     "enum" : [
 //                         0.000000,
 //                         0.500000,
 //                         1.000000
 //                     ]
 //                 },
 //                 {
-//                     “maximum” : -1.000000,
-//                     “exclusiveMaximum” : false
+//                     "maximum" : -1.000000,
+//                     "exclusiveMaximum" : false
 //                 }
 //             ]
 //         },
 //         {
-//             “enum” : [
-//                 “not_a_number”,
-//                 “-infinity”
+//             "enum" : [
+//                 "not_a_number",
+//                 "-infinity"
 //             ]
 //         }
 //     ]
 // }
 
-// Schema segment generated for type “NegativeFloat”:
-// “NegativeFloat” : {
-//     “type” : “number”,
-//     “maximum” : 0.000000,
-//     “exclusiveMaximum” : true
+// Schema segment generated for type "NegativeFloat":
+// "NegativeFloat" : {
+//     "type" : "number",
+//     "maximum" : 0.000000,
+//     "exclusiveMaximum" : true
 // }
 
-// Schema segment generated for type “InfiniteFloat”:
-// “InfiniteFloat” : {
-//     “enum” : [
-//         “infinity”,
-//         “-infinity”
+// Schema segment generated for type "InfiniteFloat":
+// "InfiniteFloat" : {
+//     "enum" : [
+//         "infinity",
+//         "-infinity"
 //     ]
 // }
 
 // Example 5: verdicttype definition with restrictions (single values)
 type verdicttype SimpleVerdict (pass, fail, error);
 
-// Schema segment generated for type “SimpleVerdict”:
-// “SimpleVerdict” : {
-//     “enum” : [
-//         “pass”,
-//         “fail”,
-//         “error”
+// Schema segment generated for type "SimpleVerdict":
+// "SimpleVerdict" : {
+//     "enum" : [
+//         "pass",
+//         "fail",
+//         "error"
 //     ]
 // }
 
-// Example 6: Union type definition with the “as value” coding instruction and
+// Example 6: Union type definition with the "as value" coding instruction and
 // its subtypes (one of which references the other)
 type union AsValueUnion {
   integer i,
   charstring str
 }
 with {
-  variant “JSON: as value”
+  variant "JSON: as value"
 }
 
 type AsValueUnion AsValueUnionValues (
   { i := 3 },
-  { str := “abc” }
+  { str := "abc" }
 );
 
 type AsValueUnion MoreAsValueUnionValues (
@@ -7084,65 +7089,65 @@ type AsValueUnion MoreAsValueUnionValues (
   { i := 6 }
 );
 
-// Schema segment generated for type “AsValueUnion”:
-// “AsValueUnion” : {
-//     “anyOf” : [
+// Schema segment generated for type "AsValueUnion":
+// "AsValueUnion" : {
+//     "anyOf" : [
 //         {
-//             “originalName” : “i”,
-//             “type” : “integer”
+//             "originalName" : "i",
+//             "type" : "integer"
 //         },
 //         {
-//             “originalName” : “str”,
-//             “type” : “string”,
-//             “subType” : “charstring”
+//             "originalName" : "str",
+//             "type" : "string",
+//             "subType" : "charstring"
 //         }
 //     ]
 // }
 
-// Schema segment generated for type “AsValueUnionValues”:
-// “AsValueUnionValues” : {
-//     “allOf” : [
+// Schema segment generated for type "AsValueUnionValues":
+// "AsValueUnionValues" : {
+//     "allOf" : [
 //         {
-//             “$ref” : “#/definitions/MyModule/AsValueUnion”
+//             "$ref" : "#/definitions/MyModule/AsValueUnion"
 //         },
 //         {
-//             “enum” : [
+//             "enum" : [
 //                 3,
-//                 “abc”
+//                 "abc"
 //             ],
-//             “valueList” : [
+//             "valueList" : [
 //                 {
-//                     “i” : 3
+//                     "i" : 3
 //                 },
 //                 {
-//                     “str” : “abc”
+//                     "str" : "abc"
 //                 }
 //             ]
 //         }
 //     ]
 // }
 
-// Schema segment generated for type “MoreAsValueUnionValues”:
-// “MoreAsValueUnionValues” : {
-//     “allOf” : [
+// Schema segment generated for type "MoreAsValueUnionValues":
+// "MoreAsValueUnionValues" : {
+//     "allOf" : [
 //         {
-//             “$ref” : “#/definitions/MyModule/AsValueUnion”
+//             "$ref" : "#/definitions/MyModule/AsValueUnion"
 //         },
 //         {
-//             “enum” : [
+//             "enum" : [
 //                 3,
-//                 “abc”,
+//                 "abc",
 //                 6
 //             ],
-//             “valueList” : [
+//             "valueList" : [
 //                 {
-//                     “i” : 3
+//                     "i" : 3
 //                 },
 //                 {
-//                     “str” : “abc”
+//                     "str" : "abc"
 //                 },
 //                 {
-//                     “i” : 6
+//                     "i" : 6
 //                 }
 //             ]
 //         }
@@ -7154,116 +7159,116 @@ type AsValueUnion MoreAsValueUnionValues (
 type record Rec {
   PosIntValues val optional,
   integer i (0..6-3),
-  octetstring os (‘1010’O, ‘1001’O, ‘1100’O) optional
+  octetstring os ('1010'O, '1001'O, '1100'O) optional
 }
 with {
-  variant(val) “JSON: name as posInt”;
-  variant(i) “JSON: name as int”;
+  variant(val) "JSON: name as posInt";
+  variant(i) "JSON: name as int";
 }
 
 type Rec RecValues (
-  { 1, 0, ‘1010’O },
-  { 5, 0, ‘1001’O },
+  { 1, 0, '1010'O },
+  { 5, 0, '1001'O },
   { 7, 2, omit },
   { omit, 1, omit }
 );
 
-// Schema segment generated for type “Rec”:
-// “Rec” : {
-//     “type” : “object”,
-//     “subType” : “record”,
-//     “properties” : {
-//         “posInt” : {
-//             “anyOf” : [
+// Schema segment generated for type "Rec":
+// "Rec" : {
+//     "type" : "object",
+//     "subType" : "record",
+//     "properties" : {
+//         "posInt" : {
+//             "anyOf" : [
 //                 {
-//                     “type” : “null”
+//                     "type" : "null"
 //                 },
-//                     “originalName” : “val”,
-//                     “#ref” : “#/definitions/MyModule/PosIntValues”
+//                 {
+//                     "originalName" : "val",
+//                     "$ref" : "#/definitions/MyModule/PosIntValues"
 //                 }
 //             ],
-//             “omitAsNull” : false
+//             "omitAsNull" : false
 //         },
-//         “int” : {
-//             “originalName” : “i”,
-//             “type” : “integer”,
-//             “minimum” : 0,
-//             “exclusiveMinimum” : false,
-//             “maximum” : 3,
-//             “exclusiveMaximum” : false
+//         "int" : {
+//             "originalName" : "i",
+//             "type" : "integer",
+//             "minimum" : 0,
+//             "exclusiveMinimum" : false,
+//             "maximum" : 3,
+//             "exclusiveMaximum" : false
 //         },
-//         “os” : {
-//             “anyOf” : [
+//         "os" : {
+//             "anyOf" : [
 //                 {
-//                     “type” : “null”,
+//                     "type" : "null"
 //                 },
 //                 {
-//                     “type” : “string”,
-//                     “subType” : “octetstring”,
-//                     “pattern” : “^([0-9A-Fa-f][0-9A-Fa-f])*$”,
-//                     “enum” : [
-//                         “1010”,
-//                         “1001”,
-//                         “1100”
+//                     "type" : "string",
+//                     "subType" : "octetstring",
+//                     "pattern" : "^([0-9A-Fa-f][0-9A-Fa-f])*$",
+//                     "enum" : [
+//                         "1010",
+//                         "1001",
+//                         "1100"
 //                     ]
 //                 }
 //             ],
-//             “omitAsNull” : false
+//             "omitAsNull" : false
 //         }
 //     },
-//     “additionalProperties” : false,
-//     “fieldOrder” : [
-//         “posInt”,
-//         “int”,
-//         “os”
+//     "additionalProperties" : false,
+//     "fieldOrder" : [
+//         "posInt",
+//         "int",
+//         "os"
 //     ],
-//     “required” : [
-//         “int”
+//     "required" : [
+//         "int"
 //     ]
 // }
 
-// Schema segment for type “RecValues”:
-// “RecValues” : {
-//     “allOf” : [
+// Schema segment for type "RecValues":
+// "RecValues" : {
+//     "allOf" : [
 //         {
-//             “$ref” : “#/definitions/MyModule/Rec”
+//             "$ref" : "#/definitions/MyModule/Rec"
 //         },
 //         {
-//             “enum” : [
+//             "enum" : [
 //                 {
-//                     “posInt” : 1,
-//                     “int” : 0,
-//                     “os” : “1010”
+//                     "posInt" : 1,
+//                     "int" : 0,
+//                     "os" : "1010"
 //                 },
 //                 {
-//                     “posInt” : 5,
-//                     “int” : 0,
-//                     “os” : “1001”
+//                     "posInt" : 5,
+//                     "int" : 0,
+//                     "os" : "1001"
 //                 },
 //                 {
-//                     “posInt” : 7,
-//                     “int” : 2
+//                     "posInt" : 7,
+//                     "int" : 2
 //                 },
 //                 {
-//                     “posInt” : 7,
-//                     “int” : 2,
-//                     “os” : null
+//                     "posInt" : 7,
+//                     "int" : 2,
+//                     "os" : null
 //                 },
 //                 {
-//                     “int” : 1,
+//                     "int" : 1
 //                 },
 //                 {
-//                     “posInt” : null,
-//                     “int” : 1
+//                     "posInt" : null,
+//                     "int" : 1
 //                 },
 //                 {
-//                     “int” : 1,
-//                     “os” : null
+//                     "int" : 1,
+//                     "os" : null
 //                 },
 //                 {
-//                     “posInt” : null,
-//                     “int” : 1,
-//                     “os” : null
+//                     "posInt" : null,
+//                     "int" : 1,
+//                     "os" : null
 //                 }
 //             ]
 //         }
@@ -7274,7 +7279,7 @@ type Rec RecValues (
 
 The JSON encoder and decoder work according to the rules defined in the JSON part of the TTCN-3 standard <<13-references.adoc#_25, [25]>> with the following differences:
 
-* No wrapper JSON object is added around the JSON representation of the encoded value, i.e. all values are encoded as if they had the JSON variant attribute `noType` (from the standard). Similarly, the decoder expects the JSON document to only contain the value’s JSON representation (without the wrapper). If a wrapper object is desired, then the type in question should be placed in a `record`, `set` or `union`.
+* No wrapper JSON object is added around the JSON representation of the encoded value, i.e. all values are encoded as if they had the JSON variant attribute `noType` (from the standard). Similarly, the decoder expects the JSON document to only contain the value's JSON representation (without the wrapper). If a wrapper object is desired, then the type in question should be placed in a `record`, `set` or `union`.
 * The JSON encoder and decoder only accept the variant attributes listed <<top-level, here>>. Some of these have the same effect as variant attributes (with similar names) from the standard. The rest of the variant attributes from the standard are not supported. See <<external-functions, here>> regarding the variant attributes `normalize` and `errorbehavior` (from the standard).
 * The syntax of the JSON encode attribute is `encode JSON`. The attribute `encode JSON RFC7159` is not supported.
 * The decoder converts the JSON number `-0.0` (in any form) to the TTCN-3 float `-0.0`, i.e. float values are decoded as if they had the JSON variant attribute `useMinus` (from the standard). The same is not true for integers, since there is no integer value `-0` in TITAN.
@@ -7342,7 +7347,7 @@ module X {
   // …
 }
 with {
-  extension “requiresTITAN R8C”;
+  extension "requiresTITAN R8C";
 }
 ----
 
@@ -7352,7 +7357,7 @@ Compiling this module with TITAN R8B or below may result in a different compiler
 
 ==== Specifying the Version of a TTCN-3 Module
 
-A module’s own version information can be specified in an extension attribute. The format of the extension attribute is "version <version data>" that is, the literal string "version" followed by the version information (R-state).
+A module's own version information can be specified in an extension attribute. The format of the extension attribute is "version <version data>", that is, the literal string "version" followed by the version information (R-state).
 
 Example:
 [source]
@@ -7361,13 +7366,13 @@ module supplier {
   // …
 }
 with {
-  extension “version R1A”;
+  extension "version R1A";
 }
 ----
 
 The version of the module should be set to match the R-state of the product it belongs to.
 
-For backward compatibility, the lack of version information (no extension attribute with "version" in the module’s "with" block) is equivalent to the highest possible version and satisfies any version requirement.
+For backward compatibility, the lack of version information (no extension attribute with "version" in the module's "with" block) is equivalent to the highest possible version and satisfies any version requirement.
 
 ==== Required Version of an Imported Module
 
@@ -7380,7 +7385,7 @@ module importer {
   import from supplier all;
 }
 with {
-  extension “requires supplier R2A”
+  extension "requires supplier R2A"
 }
 ----
 
@@ -7412,7 +7417,7 @@ A number of checks are performed during the build to ensure consistency of the T
 
 === Overview
 
-As a TTCN-3 language extension Titan can generate invalid messages for the purpose of negative testing. The purpose is to generate wrong messages that do not conform to a given type that the SUT is expecting, and send them to the SUT and observe the SUT’s reaction. In Titan only the encoding is implemented, the decoding of wrong messages is not in the scope of this feature.
+As a TTCN-3 language extension, Titan can generate invalid messages for the purpose of negative testing: wrong messages that do not conform to a given type the SUT is expecting are generated and sent to the SUT, and the SUT's reaction is observed. In Titan only the encoding is implemented; the decoding of wrong messages is not in the scope of this feature.
 
 In protocol testing, the terms abstract syntax and transport syntax can be distinguished. In TTCN-3, abstract syntaxes are the data type definitions, while the transport syntax is defined using `with` attributes (`encode`, `variant`) attached to type definitions. The negative testing feature defines modifications in the transport syntax, thus it does not affect TTCN-3 type definitions. This means that the content of the values, which shall be called *erroneous values* and *erroneous templates*, will not be modified; only their encoding will be. This encoding (transport syntax) is determined by the `with` attributes attached to the type definition; in the case of negative testing, the encoding of a value is modified by attaching special `with` attributes to the value which is to be encoded. TTCN-3 `with` attributes can be attached only to module level constants and templates; this is a limitation of the TTCN-3 standard.
 
@@ -7429,7 +7434,7 @@ The corresponding ASN.1 types can also be used when imported from an ASN.1 modul
 The following *erroneous* behaviors can be defined for the encoding of an *erroneous value* or *template*:
 
 * omit specified fields
-* change the specified field’s value or both type and value
+* change the specified field's value or both type and value
 * omit all fields before or after the specified field
 * insert a new field before or after the specified field
 
@@ -7450,7 +7455,7 @@ ErroneousKeyword ::= "erroneous"
 For an erroneous attribute the syntax of the AttribSpec, a free text within double quotes, is as follows:
 
 [source]
-AttribSpecForErroneous := IndicatorKeyword [ “(“ RawKeyword ")" ] ":=" TemplateInstance [ AllKeyword ]
+AttribSpecForErroneous := IndicatorKeyword [ "(" RawKeyword ")" ] ":=" TemplateInstance [ AllKeyword ]
 
 [source]
 IndicatorKeyword := "before" | "value" | "after"
@@ -7467,8 +7472,8 @@ type record MyRec {
 }
 const MyRec c_myrec := {i:=1,b:=true}
 with {
-  erroneous (i) “before := 123”
-  erroneous (b) “value := omit”
+  erroneous (i) "before := 123"
+  erroneous (b) "value := omit"
 }
 ----
 
@@ -7481,8 +7486,8 @@ For example:
 ----
 template MyRec t_myrec := {i:=2,b:=false}
 with {
-  erroneous (i) “after := MyRec.i:123”
-  erroneous (i) “before := MyInteger:123”
+  erroneous (i) "after := MyRec.i:123"
+  erroneous (i) "before := MyInteger:123"
 }
 ----
 
@@ -7502,8 +7507,8 @@ Both references to constant values and literal values can be used:
 const MyRec c_myrec := {i:=3,b:=true}
 template MyRec t_myrec := {i:=2,b:=false}
 with {
-  erroneous (i) “after := c_myrec” // type determined by the definition of c_myrec
-  erroneous (i) “before := MyRec: {i:=4,b:=true}” // type must be specified
+  erroneous (i) "after := c_myrec" // type determined by the definition of c_myrec
+  erroneous (i) "before := MyRec: {i:=4,b:=true}" // type must be specified
 }
 ----
 One or more field qualifiers must be used in the AttribQualifier part. If more than one field is specified, then the erroneous behavior will be attached to all specified fields, for example:
@@ -7521,11 +7526,11 @@ MyUnion ::= CHOICE { sof MySeqOf }
 MySeqOf ::= SEQUENCE OF MySeq
 MySeq ::= SEQUENCE { i INTEGER }
 const MyUnion c_myunion := { … }
-with { erroneous (sof[5].i) “value := 3.14” }
+with { erroneous (sof[5].i) "value := 3.14" }
 This also works in case of recursive types:
 type record MyRRec { MyRRec r optional }
 const MyRRec c_myrrec := { … }
-with { erroneous (r.r.r.r.r) “value := omit” }
+with { erroneous (r.r.r.r.r) "value := omit" }
 ----
 
 If the erroneous value does not contain a field that is referred to by the erroneous qualifier, then the erroneous behavior specified for that field will have no effect. For example:
@@ -7535,8 +7540,8 @@ If the erroneous value does not contain a field which was referred by the errone
 type union MyUni { integer i, boolean b }
 const MyUni c_myuni := { i:=11}
 with {
-  erroneous (i) “value := MyUni.i:22”
-  erroneous (b) “value := MyUni.b:false” // this rule has no effect
+  erroneous (i) "value := MyUni.i:22"
+  erroneous (b) "value := MyUni.b:false" // this rule has no effect
 }
 ----
 
@@ -7571,7 +7576,7 @@ control {
 }
 ----
 
-Erroneous constants can be assigned to fields of other erroneous constants and templates, however if the original field or any field embedded in that field was made erroneous then the top level erroneous data will be used and the referenced constant’s erroneous data ignored. Erroneous data can be visualized as a tree that is a sub-tree of the tree of a type (in the examples the R type, which is recursive). If two erroneous sub-trees overlap then the one which was attached to the constant used as the value of that field where the overlapping happens will be ignored.
+Erroneous constants can be assigned to fields of other erroneous constants and templates; however, if the original field or any field embedded in that field was made erroneous, then the top level erroneous data will be used and the referenced constant's erroneous data ignored. Erroneous data can be visualized as a tree that is a sub-tree of the tree of a type (in the examples the R type, which is recursive). If two erroneous sub-trees overlap, then the one which was attached to the constant used as the value of that field where the overlapping happens will be ignored.
 
 Example:
 [source]
@@ -7611,7 +7616,7 @@ Meaning of IndicatorKeyword:
 
 In case of unions only the "value" keyword can be used.
 
-The optional "raw" keyword that can follow the IndicatorKeyword should be used when raw binary data has to be inserted instead of a value. The specified binary data will be inserted into the encoder’s output stream at the specified position. The specified data will not be checked in any way for correctness. For convenience this binary data can be specified using TTCN-3 constants as containers. For different encoding types the different containers are as follows:
+The optional "raw" keyword that can follow the IndicatorKeyword should be used when raw binary data has to be inserted instead of a value. The specified binary data will be inserted into the encoder's output stream at the specified position. The specified data will not be checked in any way for correctness. For convenience this binary data can be specified using TTCN-3 constants as containers. For different encoding types the different containers are as follows:
 
 [cols=",,,,,,,",options="header",]
 |===
@@ -7626,18 +7631,18 @@ Bitstrings can be used for encoding types that support the insertion of not only
 
 [source]
 ----
-erroneous (i) "after(raw) := ‘0’B"
+erroneous (i) "after(raw) := '0'B"
 // replace a field with bits 101:
-erroneous (b) "value(raw) := ‘101’B"
+erroneous (b) "value(raw) := '101'B"
 ----
 
 Charstring types can be used in case of text-based encodings. For example, to insert an XML string between two fields:
 [source]
 ----
-erroneous (i) "after(raw) := ""<ERROR>erroneous element</ERROR>"””
+erroneous (i) "after(raw) := ""<ERROR>erroneous element</ERROR>"""
 ----
 
-Notice that the double quotes surrounding the charstring must be doubled because it’s inside another string.
+Notice that the double quotes surrounding the charstring must be doubled because it's inside another string.
 
 The optional "all" keyword after the TemplateInstance must be used when omitting all fields before or after a specified field; in all other cases it must not be used.
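 
 A minimal sketch of the "all" keyword (the record type and field names here are illustrative, not taken from the examples below): omitting every field that follows a given field.
 [source]
 ----
 type record R3 { integer i, boolean b, charstring s }
 const R3 c_r3 := { i:=1, b:=true, s:="str" }
 with { erroneous (i) "after := omit all" }
 // all fields after i (here b and s) are left out of the encoding
 ----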
 
@@ -7651,9 +7656,9 @@ type record MyRec {
   boolean b,
   charstring s length (3),
   MyRec r optional
-} with { encode “RAW” variant “ ….. “ }
+} with { encode "RAW" variant " ….. " }
 type record of integer MyRecOf;
-type MyRec.i MyInteger with { encode “RAW” variant “ ….. “ }
+type MyRec.i MyInteger with { encode "RAW" variant " ….. " }
 ----
 
 ==== Discard Mandatory Fields
@@ -7665,29 +7670,29 @@ type record of integer IntList;
 var IntList vl_myList := { 1, 2, 3 };
 var IntList vl_emptyList := {};
 replace(vl_myList, 1, 2, vl_emptyList); // returns { 1 }
-replace(“abcdef”, 2, 1, “”); // returns “abdef”
-replace(‘12FFF’H, 3, 2, ‘’H); // returns ‘12F’H
+replace("abcdef", 2, 1, ""); // returns "abdef"
+replace('12FFF'H, 3, 2, ''H); // returns '12F'H
 ----
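 
 For comparison, a mandatory field can also be discarded from the encoding with an erroneous attribute; a minimal sketch reusing the MyRec type from the example definitions above (where field b is mandatory):
 [source]
 ----
 const MyRec c_myrec2 := { i:=1, b:=true, s:="str", r:=omit }
 with { erroneous (b) "value := omit" }
 // the mandatory field b is left out of the encoded message
 ----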
 
 ==== Insert New Fields
 
 [source]
 ----
-const MyRec c_myrec3 := { i:=1, b:=true, s:=”str”, r:=omit }
+const MyRec c_myrec3 := { i:=1, b:=true, s:="str", r:=omit }
 with {
-  erroneous (i) “before := MyRec.i:3” // has same type as field i
-  erroneous (b) “after := MyInteger:4”
+  erroneous (i) "before := MyRec.i:3" // has same type as field i
+  erroneous (b) "after := MyInteger:4"
 }
 const MyRecOf c_myrecof2 := { 1, 2, 3 }
-with { erroneous ([1]) “after := MyRecOf[-]:99” }
+with { erroneous ([1]) "after := MyRecOf[-]:99" }
 ----
 
 ==== Ignore Subtype Restrictions
 
 [source]
 ----
-const MyRec c_myrec4 := { i:=1, b:=true, s:=”str”, r:=omit }
-with { erroneous (s) “value :=””too long string””” }
+const MyRec c_myrec4 := { i:=1, b:=true, s:="str", r:=omit }
+with { erroneous (s) "value :=""too long string""" }
 ----
 
 ==== Change the Encoding of a Field
@@ -7695,8 +7700,8 @@ with { erroneous (s) “value :=””too long string””” }
 Here the TTCN-3 root type and value of field i are not changed but the encoding is changed:
 [source]
 ----
-const MyRec c_myrec5 := { i:=1, b:=true, s:=”str”, r:=omit }
-with { erroneous (i) “value := MyInteger:1” }
+const MyRec c_myrec5 := { i:=1, b:=true, s:="str", r:=omit }
+with { erroneous (i) "value := MyInteger:1" }
 ----
 
 ==== Completely Change a Field to a Different Type and Value
@@ -7704,8 +7709,8 @@ with { erroneous (i) “value := MyInteger:1” }
 The second field is changed from a boolean to an integer:
 [source]
 ----
-const MyRec c_myrec6 := { i:=1, b:=true, s:=”str”, r:=omit }
-with { erroneous (b) “value := MyInteger:1” }
+const MyRec c_myrec6 := { i:=1, b:=true, s:="str", r:=omit }
+with { erroneous (b) "value := MyInteger:1" }
 ----
 
 === Summary
@@ -7735,7 +7740,7 @@ For example, encoding the following value:
 [source]
 ----
 type record R { integer i }
-const R c_r := { 42 } with { erroneous (i) “value := \“fourty-two\” ” }
+const R c_r := { 42 } with { erroneous (i) "value := \"fourty-two\" " }
 ----
 
 will result in the following XML:
@@ -7753,7 +7758,7 @@ To generate an XML element with a given name, e.g. "s", the following code can b
 ----
 type record R { integer i }
 type charstring s; // a type alias
-const R c_r := { 42 } with { erroneous (i) “value := s : \“fourty-two\” ” }
+const R c_r := { 42 } with { erroneous (i) "value := s : \"fourty-two\" " }
 ----
 
 The resulting XML will be (erroneous values highlighted in yellow):
@@ -7775,11 +7780,11 @@ type record R2 {
   charstring at,
   charstring el
 }
-with { variant (at) “attribute” }
+with { variant (at) "attribute" }
 
 const R2 c_r2 := {
-  at := “tack”, el := “le”
-} with { erroneous (at) “before := 13 ” }
+  at := "tack", el := "le"
+} with { erroneous (at) "before := 13 " }
 results in:
 
 <R2[yellow-background]##<INTEGER>13</INTEGER>## at='tack'>
@@ -7792,11 +7797,11 @@ To ensure the erroneous value is encoded as an XML attribute, a TTCN-3 type alia
 [source,subs="+quotes"]
 ----
 // type record R2 as above
-type integer newatt with { variant “attribute” } // type alias for integer
+type integer newatt with { variant "attribute" } // type alias for integer
 
 const R2 c_r2a := {
-  at := “tack”, el := “le”
-} with { erroneous (at) “before := newatt : 13 ” }
+  at := "tack", el := "le"
+} with { erroneous (at) "before := newatt : 13 " }
 
 <R2 [yellow-background]##newatt='13'## at='tack'>
   <el>le</el>
@@ -7809,8 +7814,8 @@ const R2 c_r2a := {
 ----
 // type record R2 as above
 const R2 c_r2r := {
-  at := “tack”, el := “le”
-} with { erroneous (at) “before(raw) := ""ax='xx'"" ” } // not compensated
+  at := "tack", el := "le"
+} with { erroneous (at) "before(raw) := ""ax='xx'"" " } // not compensated
 
 <R2[yellow-background]##ax='xx'## at='tack'>
   <el>le</el>
@@ -7823,8 +7828,8 @@ The resulting XML is not well formed.
 ----
 // type record R2 as above
 const R2 c_r2r := {
-  at := “tack”, el := “le”
-} with { erroneous (at) “before(raw) := "" ax='xx'"" ” }
+  at := "tack", el := "le"
+} with { erroneous (at) "before(raw) := "" ax='xx'"" " }
 // compensated, note space here-----------^
 
 <R2 [yellow-background]##ax='xx'## at='tack'>
@@ -7897,7 +7902,7 @@ type record extbitrec {
                     } with { variant "EXTENSION_BIT(yes)" encode "RAW" }
                     const extbitrec cr := { 1, 2 } with
                     { erroneous(f1) "before := 1" erroneous(f2) "value := 1" }
-                    // The result will be ‘010181’O.
+                    // The result will be '010181'O.
 ----
 . `EXTENSION_BIT_GROUP(<param1, param2, param3>)`
 +
@@ -7922,11 +7927,11 @@ const extbitgrouprec cr := { 1, 2, 3, 4, 5, 6 } with {
   erroneous(f4) "value := 1"
   erroneous(f6) "after := 1" }
 // None of the extension bit groups are affected.
-// The result will be ‘0101028301058601’O.
+// The result will be '0101028301058601'O.
 ----
 . `LENGTHTO(<param>)` and `LENGTHINDEX(<param>)`
 +
-If any of the fields the length is calculated from or the field the result is stored into are affected by negative testing attributes a warning will be given and the length calculation will be ignored. In this case the value of the field the result is stored into is undefined, but it’s possible to set its value using negative testing attributes.
+If any of the fields the length is calculated from, or the field the result is stored into, are affected by negative testing attributes, a warning will be given and the length calculation will be ignored. In this case the value of the field the result is stored into is undefined, but it's possible to set its value using negative testing attributes.
 +
 [source]
 ----
@@ -7946,11 +7951,11 @@ type record lengthtorec1 {
                     const lengthtorec2 cr := { 1, { 2 }, "", "one" } with {
                       erroneous(f1) "before := 1" erroneous(f2) "after := 1" }
                     // No conflict, LENGTHTO is calculated normally.
-                    // The result will be ‘010103016F6E65’O.
+                    // The result will be '010103016F6E65'O.
 ----
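 
 A hypothetical sketch of the conflicting case (the type and field names are illustrative): when the length field itself is made erroneous, the length calculation is skipped with a warning and the forced value is encoded instead.
 [source]
 ----
 type record LenRec {
   integer len,
   octetstring data
 } with { encode "RAW" variant (len) "LENGTHTO(data)" }
 const LenRec c_len := { 0, 'AABB'O }
 with { erroneous (len) "value := 5" }
 // warning: LENGTHTO is ignored; len is encoded with the value 5
 ----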
 . `POINTERTO(<param>)`
 +
-If any of the fields between (and including) the pointer and the pointed fields are affected by negative testing attributes (e.g. new fields were added in-between) a warning will be given and the pointer calculation will be ignored. In this case the value of the pointer field will be undefined, but it’s possible to set its value using negative testing attributes.
+If any of the fields between (and including) the pointer and the pointed fields are affected by negative testing attributes (e.g. new fields were added in-between), a warning will be given and the pointer calculation will be ignored. In this case the value of the pointer field will be undefined, but it's possible to set its value using negative testing attributes.
 +
 [source]
 ----
@@ -7962,7 +7967,7 @@ type record pointertorec {
                     const pointertorec cr := { 1, "dinner", "bell" } with {
                       erroneous(f1) "before := 1" erroneous(f3) "after := 1" }
                     // No conflict, POINTERTO is calculated normally.
-                    // The result will be ‘010264696E6E657262656C6C01’O.
+                    // The result will be '010264696E6E657262656C6C01'O.
 ----
 . `PRESENCE(<param>)`
 +
 If the optional field or any of the fields referenced in the presence indicator list are affected by negative testing attributes, a warning will be given and the fields will not be modified according to the PRESENCE RAW encoding attribute; it will be completely ignored.
@@ -7982,7 +7987,7 @@ type record presencerec1 {
                     const presencerec2 cr := { 1, { 2 }, 3, 4 } with {
                       erroneous(f1) "after := 1" erroneous(f4) "after := 1" }
                     // No conflict.
-                    // The result will be ‘090102030401’O.
+                    // The result will be '090102030401'O.
 ----
 . `TAG(<param>)` and `CROSSTAG(<param>)`
 +
@@ -8002,7 +8007,7 @@ type record tagrec1 {
                       erroneous(f1) "after := 1" erroneous(f2) "after := 1"
                       erroneous(f3) "value := 33" }
                     // No conflict.
-                    // The result will be ‘0101020121’O.
+                    // The result will be '0101020121'O.
                     type record crosstagrec1 {
                       integer f1,
                       integer f2
@@ -8022,7 +8027,7 @@ type record tagrec1 {
                       erroneous(f1) "before := 1" erroneous(f2) "after := 1"
                       erroneous(f3) "after := 9" }
                     // No conflict.
-                    // The result will be ‘01010201030409’O.
+                    // The result will be '01010201030409'O.
 ----
 
 === Special Considerations for JSON Encoding
@@ -8033,7 +8038,7 @@ There are a number of particularities related to the negative testing of the JSO
 +
 Replaced values in JSON objects (fields of records, sets and unions) keep their field name, even if the replaced value is of a different type.
 +
-Inserted values (in records, sets and unions) receive a field name derived from the name of the value’s type. For built-in types (e.g. integer, boolean, universal charstring) the XML name will be according to Table 4 at the end of clause 11.25 in X.680 (<<13-references.adoc#_6, [6]>>), usually the uppercased name of the type (e.g. INTEGER, BOOLEAN, UNIVERSAL_CHARSTRING). For custom types the field name will start with an '@' character, followed by the name of the module the type was defined in, and the name of the type separated by a dot ('.').
+Inserted values (in records, sets and unions) receive a field name derived from the name of the value's type. For built-in types (e.g. integer, boolean, universal charstring) the field name will be according to Table 4 at the end of clause 11.25 in X.680 (<<13-references.adoc#_6, [6]>>), usually the uppercased name of the type (e.g. INTEGER, BOOLEAN, UNIVERSAL_CHARSTRING). For custom types the field name will start with an '@' character, followed by the name of the module the type was defined in, and the name of the type separated by a dot ('.').
 +
 Example:
 +
@@ -8045,16 +8050,16 @@ type record R {
   charstring cs
 }
 type boolean B;
-const R c_r := { 3, “a” } with {
-  erroneous(i) “before := \“before\””;
-  erroneous(i) “value := \“value\””;
-  erroneous(cs) “before := B:true”;
-  erroneous(cs) “after := R.i:10”;
+const R c_r := { 3, "a" } with {
+  erroneous(i) "before := \"before\"";
+  erroneous(i) "value := \"value\"";
+  erroneous(cs) "before := B:true";
+  erroneous(cs) "after := R.i:10";
 }
 ...
 }
 // JSON encoding (erroneous values highlighted in yellow):
-// [yellow-background]#{“charstring”:“before”#,“i”:[yellow-background]##“value”##,[yellow-background]#“@M.B”:true#,“cs”:“a”,[yellow-background]#“@M.R.i”:10#}
+// [yellow-background]#{"charstring":"before"#,"i":[yellow-background]##"value"##,[yellow-background]#"@M.B":true#,"cs":"a",[yellow-background]#"@M.R.i":10#}
 ----
 
 * *Raw values*
@@ -8070,16 +8075,16 @@ type record R {
   charstring cs
 }
 type record of integer L;
-const R c_r1 := { 1, “a” } with { erroneous(i) “before(raw) := \”abc\”” };
-const R c_r2 := { 1, “a” } with { erroneous(i) “value(raw) := \”abc\”” };
-const R c_r3 := { 1, “a” } with { erroneous(i) “after(raw) := \”abc\”” };
-const L c_l1 := { 1, 2, 3 } with { erroneous([1]) “before(raw) := \”x\”” };
-const L c_l2 := { 1, 2, 3 } with { erroneous([1]) “value(raw) := \”x\”” };
-const L c_l3 := { 1, 2, 3 } with { erroneous([1]) “after(raw) := \”x\”” };
+const R c_r1 := { 1, "a" } with { erroneous(i) "before(raw) := \"abc\"" };
+const R c_r2 := { 1, "a" } with { erroneous(i) "value(raw) := \"abc\"" };
+const R c_r3 := { 1, "a" } with { erroneous(i) "after(raw) := \"abc\"" };
+const L c_l1 := { 1, 2, 3 } with { erroneous([1]) "before(raw) := \"x\"" };
+const L c_l2 := { 1, 2, 3 } with { erroneous([1]) "value(raw) := \"x\"" };
+const L c_l3 := { 1, 2, 3 } with { erroneous([1]) "after(raw) := \"x\"" };
 // JSON encodings (erroneous values highlighted in yellow):
-// c_r1: {[yellow-background]#abc#“i”:1,“cs”:“a”}
-// c_r2: {[yellow-background]#abc#“cs”:“a”}
-// c_r3: {“i”:1[yellow-background]##abc##,“cs”:“a”}
+// c_r1: {[yellow-background]#abc#"i":1,"cs":"a"}
+// c_r2: {[yellow-background]#abc#"cs":"a"}
+// c_r3: {"i":1[yellow-background]##abc##,"cs":"a"}
 // c_l1: [1[yellow-background]##x##,2,3]
 // c_l2: [1[yellow-background]##x##,3]
 // c_l3: [1,2[yellow-background]##x##,3]
@@ -8087,7 +8092,7 @@ const L c_l3 := { 1, 2, 3 } with { erroneous([1]) “after(raw) := \”x\””
 
 * *Unsupported types*
 +
-Although the JSON encoder supports anytypes and arrays, these cannot have erroneous attributes, thus the JSON encoder’s negative testing feature is disabled for these types.
+Although the JSON encoder supports anytypes and arrays, these cannot have erroneous attributes, thus the JSON encoder's negative testing feature is disabled for these types.
 
 === Updating erroneous attributes
 
@@ -8096,14 +8101,14 @@ The erroneous attributes of values and templates can be changed dynamically, usi
 Its syntax is:
 [source]
 ----
-UpdateStatement ::= UpdateKeyword “(“ ExtendedIdentifier ")" [ WithKeyword WithAttribList ]
+UpdateStatement ::= UpdateKeyword "(" ExtendedIdentifier ")" [ WithKeyword WithAttribList ]
 
 UpdateKeyword ::= "@update"
 ----
 
 The `@update` statement can be used in functions, altsteps, testcases and control parts. Per the BNF productions in the TTCN-3 standard, the `UpdateStatement` defined here would be in the FunctionStatement and ControlStatement productions.
 
-The `@update` statement replaces the erroneous attributes of the value or template referenced by ExtendedIdentifier with the erroneous attributes specified in WithAttribList. The statement overwrites any erroneous attributes the value or template may have had before. If the `with' attributes are omitted, then the statement removes all the value’s or template’s erroneous attributes.
+The `@update` statement replaces the erroneous attributes of the value or template referenced by ExtendedIdentifier with the erroneous attributes specified in WithAttribList. The statement overwrites any erroneous attributes the value or template may have had before. If the `with` attributes are omitted, then the statement removes all the value's or template's erroneous attributes.
 
 Example:
 [source]
@@ -8113,22 +8118,22 @@ type record MyRec {
   boolean b
 }
 with {
-  encode “JSON”
+  encode "JSON"
 }
 const MyRec c_myrec := { i:=1, b:=true }
 with {
-  erroneous (i) “before := 123”
-  erroneous (b) “value := omit”
+  erroneous (i) "before := 123"
+  erroneous (b) "value := omit"
 }
 function func() {
-  log(encvalue(c_myrec)); // output: {“INTEGER”:123,“i”:1}
+  log(encvalue(c_myrec)); // output: {"INTEGER":123,"i":1}
 
-  @update(c_myrec) with { erroneous(i) “value := 3.5” }
-  log(encvalue(c_myrec)); // output: {“i”:3.500000,“b”:true}
+  @update(c_myrec) with { erroneous(i) "value := 3.5" }
+  log(encvalue(c_myrec)); // output: {"i":3.500000,"b":true}
   // the erroneous attributes set for c_myrec.b at definition have been
   // overwritten by the @update statement
   @update(c_myrec);
-  log(encvalue(c_myrec)); // output: {“i”:1,“b”:true} // no longer erroneous
+  log(encvalue(c_myrec)); // output: {"i":1,"b":true} // no longer erroneous
 }
 ----
 
@@ -8143,10 +8148,10 @@ function f_sqr(integer p) return integer {
 function func2() {
   var integer x := 7;
   @update(c_myrec) with {
-    erroneous(i) “value := x + 6”;
-    erroneous(b) “value := int2str(1 + f_sqr(x – 3)) & \“x\” ”;
+    erroneous(i) "value := x + 6";
+    erroneous(b) "value := int2str(1 + f_sqr(x - 3)) & \"x\" ";
   }
-  log(encvalue(c_myrec)); // output: {“i”:13,“b”:“17x”}
+  log(encvalue(c_myrec)); // output: {"i":13,"b":"17x"}
 }
 ----
 
@@ -8166,11 +8171,11 @@ type component MyComp {
 function func3() runs on MyComp {
   myCompVar := 10;
   @update(c_myrec) with {
-    erroneous(i) “value := myCompVar”
+    erroneous(i) "value := myCompVar"
   } // the erroneous value of c_myrec.i is calculated here
 
   myCompVar := 3;
-  log(encvalue(c_myrec)); // output: {“i”:10,“b”:true}
+  log(encvalue(c_myrec)); // output: {"i":10,"b":true}
   // even though the component variable has changed, the encoder is using the
   // old value stored at the @update statement
 }
@@ -8182,12 +8187,12 @@ The testcase stop operation defines a user defined immediate termination of a te
 
 Syntax:
 [source]
-testcase "." stop [ “(“ { ( FreeText | TemplateInstance ) [ ","] } ")" ]
+testcase "." stop [ "(" { ( FreeText | TemplateInstance ) [ ","] } ")" ]
 
 Example:
 [source]
 ----
-testcase.stop(“Unexpected Termination”);
+testcase.stop("Unexpected Termination");
 // The testcase stops with a Dynamic Testcase Error and the parameter string is
 // written to the log.
 ----
@@ -8246,7 +8251,7 @@ testcase TC(in integer i) runs on MyComp {
   }
   @catch(err) {
     if (match(err, pattern "*division by zero*")) {
-      log(“division by zero detected”);
+      log("division by zero detected");
       setverdict(fail); // the verdict is fail instead of error
     } else {
       throw(err); // external function used to re-throw the DTE
@@ -8270,7 +8275,7 @@ testcase TC() runs on MyComp {
       }
       @catch(dte_str) {
         if (match(dte_str, <some pattern for minor errors>)) {
-          log(“minor error “, dte_str, “ ignored, continuing load test…”);
+          log("minor error ", dte_str, " ignored, continuing load test…");
         } else {
           throw(dte_str);
         }
@@ -8279,7 +8284,7 @@ testcase TC() runs on MyComp {
     setverdict(pass);
   }
   @catch(dte_msg) {
-    log(“Something went very wrong: “, dte_msg);
+    log("Something went very wrong: ", dte_msg);
     setverdict(fail);
   }
 }
@@ -8287,7 +8292,7 @@ testcase TC() runs on MyComp {
 
 == Lazy Parameter Evaluation
 
-This feature was developed for load testing, to speed up function execution by not evaluating the actual parameter of the function when its value is not used inside the function. It speeds up execution when relatively large expressions are used as "in" actual parameters. In the normal case the parameter is always evaluated before the execution of the function starts, even if the parameter is never used. In case of lazy parametrization the parameter is not evaluated if it’s never used and it is evaluated exactly once when it is used. It is important to note that the values referenced by the expression may change before the evaluation actually happens. This feature can be used only in case of "in" parameters, in case of "inout" and "out" parameters expressions cannot be used and thus it would be useless. The new titan specific keyword _@lazy_ was introduced, this must be placed right before the type in the formal parameter. This can be used for both values and templates of all types.
+This feature was developed for load testing, to speed up function execution by not evaluating the actual parameter of the function when its value is not used inside the function. It speeds up execution when relatively large expressions are used as "in" actual parameters. In the normal case the parameter is always evaluated before the execution of the function starts, even if the parameter is never used. In case of lazy parametrization the parameter is not evaluated if it's never used, and it is evaluated exactly once when it is first used. It is important to note that the values referenced by the expression may change before the evaluation actually happens. This feature can be used only in case of "in" parameters; for "inout" and "out" parameters expressions cannot be used, so lazy evaluation would be useless there. The new Titan-specific keyword _@lazy_ was introduced; it must be placed right before the type in the formal parameter. It can be used for both values and templates of all types.
 
 An example logging function that does not evaluate its message parameter if not used:
 [source]
@@ -8303,7 +8308,7 @@ calling the function with an expression:
 
 [source]
 ----
-MyLog( “MyLog: ” & log2str(some_large_data_structure) );
+MyLog( "MyLog: " & log2str(some_large_data_structure) );
 ----
 
 If `logEnabled` is false the above actual parameter will not be evaluated. Example for evaluation:
@@ -8334,7 +8339,7 @@ Currently the only limitation is that function reference types cannot have lazy
 
 == Differences between the Load Test Runtime and the Function Test Runtime
 
-The Function Test runtime sometimes provides extra features that the default Load Test runtime doesn’t (due to it being optimized for performance). One of these features, negative testing for encoders, was already discussed <<build-consistency-checks, here>>.
+The Function Test runtime sometimes provides extra features that the default Load Test runtime doesn't (due to it being optimized for performance). One of these features, negative testing for encoders, was already discussed <<build-consistency-checks, here>>.
 
 === Referencing record of elements through function parameters
 
@@ -8358,11 +8363,11 @@ f_param_ref(v_roi, v_roi[3]);
 
 ----
 
-This also works if the `record of` or its element(s) are embedded into other structures, and these structures are passed as the function’s parameters. It also works if the `record of` is an `optional` field of a `record` or `set`, and the field is set to `omit` inside the function.
+This also works if the `record of` or its element(s) are embedded into other structures, and these structures are passed as the function's parameters. It also works if the `record of` is an `optional` field of a `record` or `set`, and the field is set to `omit` inside the function.
 
 This functionality does not work for templates.
 
-WARNING: a side effect of this feature is that, in the Function Test runtime, passing an element outside of the record of’s bounds as an `out` or `inout` parameter does not extend the record of if the function doesn’t change the parameter, instead the size of the `record of` will remain unchanged. In the Load Test runtime this would change the size of the `record of` to the value of the index, and the `record of` would contain unbound elements at its end.
+WARNING: a side effect of this feature is that, in the Function Test runtime, passing an element outside of the record of's bounds as an `out` or `inout` parameter does not extend the record of if the function doesn't change the parameter; instead, the size of the `record of` will remain unchanged. In the Load Test runtime this would change the size of the `record of` to the value of the index, and the `record of` would contain unbound elements at its end.
 
 Example (filling an array up by passing the element after the last as a function parameter):
 [source]
@@ -8486,11 +8491,11 @@ MyPort.receive(MyType:?) -> value v_myVar; // works in both runtimes,
 
 MyPort.receive(MyType:?) -> value (v_myVar, v_myHeaderVar := header)
 // only works in the Function Test runtime, the record is stored in v_myVar
-// and its field ‘header’ is stored in v_myHeaderVar;
+// and its field 'header' is stored in v_myHeaderVar;
 // causes a compilation error in the Load Test runtime
 
 MyPort.receive(MyType:?) -> value (v_myVar2 := @decoded payload)
-// only works in the Function Test runtime, the field ‘payload’ from the
+// only works in the Function Test runtime, the field 'payload' from the
 // received value is decoded (into a value of type MyType2, with the encoding
 // type set for MyType2) and stored in v_myVar2;
 // causes a compilation error in the Load Test runtime
@@ -8556,7 +8561,7 @@ ls -1 *.ttcn > prof_files.txt
 ttcn3_compiler -L -z prof_files.txt *.ttcn
 ----
 
-Once activated the profiler’s behavior can be customized through the [`PROFILER`] section in the configuration file (for more details see <<7-the_run-time_configuration_file.adoc#profiler, here>>).
+Once activated, the profiler's behavior can be customized through the [`PROFILER`] section in the configuration file (for more details see <<7-the_run-time_configuration_file.adoc#profiler, here>>).
 
 === Gathered information
 
@@ -8564,9 +8569,9 @@ The profiler measures two things: the total execution time of a code line or fun
 
 The profiler measures all times with microsecond precision.
 
-The profiler classifies the following TTCN-3 elements as functions: functions, testcases, altsteps, parameterized templates and the control part (in this case the function’s name is `control`). External functions are not considered `functions' by the profiler, they are treated as regular TTCN-3 commands.
+The profiler classifies the following TTCN-3 elements as functions: functions, testcases, altsteps, parameterized templates and the control part (in this case the function's name is `control`). External functions are not considered "functions" by the profiler; they are treated as regular TTCN-3 commands.
 
-The `code lines' contain any line with at least one TTCN-3 command. The first line of a function is always a code line (even if it doesn’t contain any commands), and measures the time between the beginning of the function’s execution and the beginning of the execution of the first line in the function. The time between the end of the last line’s execution and the end of the function’s execution is not measured separately; it is simply added to the last line’s total time.
+The "code lines" contain any line with at least one TTCN-3 command. The first line of a function is always a code line (even if it doesn't contain any commands), and measures the time between the beginning of the function's execution and the beginning of the execution of the first line in the function. The time between the end of the last line's execution and the end of the function's execution is not measured separately; it is simply added to the last line's total time.
 
 In the following example there are 10 code lines and 3 functions:
 [source]
@@ -8574,18 +8579,18 @@ In the following example there are 10 code lines and 3 functions:
 module prof1 {
 type component C {}
 const integer c1 := 7;   // line 5
-function f1(inout integer x) runs on C   // line 7, function ‘f1’
+function f1(inout integer x) runs on C   // line 7, function 'f1'
 {
   x := x + c1;   // line 9
 }
-testcase tc1() runs on C   // line 12, function ‘tc1’
+testcase tc1() runs on C   // line 12, function 'tc1'
 {
   var integer x := 6;   // line 14
   f1(x);   // line 15
   log(x);   // line 16
   x := x + 1;   // line 17
 }
-control {   // line 20, function ‘prof1’
+control {   // line 20, function 'prof1'
   execute(tc1());   // line 21
 }
 }
@@ -8593,11 +8598,11 @@ control {   // line 20, function ‘prof1’
 
 ==== Gross and net times
 
-The line times measured by the profiler are gross times, meaning they also contain the execution times of all function calls in that line, not just the execution of the line itself. A setting in the configuration file can change the profiler to measure net line times instead, in which case the execution times of function calls will not be added to lines’ total execution times.
+The line times measured by the profiler are gross times, meaning they also contain the execution times of all function calls in that line, not just the execution of the line itself. A setting in the configuration file can change the profiler to measure net line times instead, in which case the execution times of function calls will not be added to lines' total execution times.
 
-The same is true for functions: by default the execution times of function calls inside the function are included in the function’s total time. A configuration file setting can change the profiler to measure net function times, in which case the function’s total time will not contain the times of embedded function calls.
+The same is true for functions: by default the execution times of function calls inside the function are included in the function's total time. A configuration file setting can change the profiler to measure net function times, in which case the function's total time will not contain the times of embedded function calls.
 
-If a function is defined in a module where profiling is not activated (does not appear in the compiler option’s file list), and is called from a module where profiling is activated, then that function call acts as if it were a regular code line (its execution time will be added the caller line’s and function’s total time in both cases).
+If a function is defined in a module where profiling is not activated (does not appear in the compiler option's file list), and is called from a module where profiling is activated, then that function call acts as if it were a regular code line (its execution time will be added to the caller line's and function's total time in both cases).
 
 === Contents of the statistics file
 
@@ -8654,7 +8659,7 @@ Total:		36 lines,	10 functions
 * Sorted execution counts (4 lists) – same as the lists of total times
 * Sorted average times (4 lists) – same as the previous two
 * Top 10 total times, execution counts and average times (6 lists) – these contain the first 10 entries from every sorted global list (in the order mentioned in the previous three)
-* Unused code lines and functions (2 lists) – these don’t contain any data, only the code line / function specification, they are grouped by module, first the unused lines, then the functions
+* Unused code lines and functions (2 lists) – these don't contain any data, only the code line / function specification, they are grouped by module, first the unused lines, then the functions
 
 Any of these lists can be disabled in the configuration file (either one by one or grouped).
 
@@ -8664,15 +8669,15 @@ The profiler can be stopped using the `@profiler.stop` command. When stopped the
 
 A stopped profiler can be restarted with the `@profiler.start` command. Similarly, this has no effect if the profiler is already running.
 
-The execution count of a function is measured at the start of the function, thus if the profiler is stopped when the function is called, its call will not be measured, even if the profiler is restarted during the function’s execution.
+The execution count of a function is measured at the start of the function, thus if the profiler is stopped when the function is called, its call will not be measured, even if the profiler is restarted during the function's execution.
 
 Starting and stopping the profiler only affects profiling and code coverage in the current component.
 
-The boolean value `@profiler.running` stores the state of the profiler in the current component (`true` if it’s running or `false` if it’s stopped).
+The boolean value `@profiler.running` stores the state of the profiler in the current component (`true` if it's running or `false` if it's stopped).
 
 By default the profiler is automatically started for each component when the program starts, this can be changed in the configuration file.
 
-Usage example (a function that’s always profiled and returns the profiler to its original state):
+Usage example (a function that's always profiled and returns the profiler to its original state):
 [source]
 ----
 function f1(inout integer x) runs on C
@@ -8690,13 +8695,13 @@ function f1(inout integer x) runs on C
 
 In parallel mode a separate instance of the TTCN-3 Profiler runs on each process (each component, including the MTC, and each HC). These each generate a database file.
 
-The PTCs’ and the MTC’s profilers generate temporary database files (their names are created by appending a dot and either `mtc` or the PTC’s component reference to the original database file name).
+The PTCs' and the MTC's profilers generate temporary database files (their names are created by appending a dot and either `mtc` or the PTC's component reference to the original database file name).
 
-The Host Controller’s profiler merges the database files produced by its child processes with its own (and with the data gathered on previous runs, if data aggregation is set) and prints the final database file. The HC’s profiler is also responsible for generating the statistics file.
+The Host Controller's profiler merges the database files produced by its child processes with its own (and with the data gathered on previous runs, if data aggregation is set) and prints the final database file. The HC's profiler is also responsible for generating the statistics file.
 
 The profilers on multiple Host Controllers do not communicate with each other, thus a profiled test system running on multiple hosts will generate multiple database files and statistics files (one of each for every HC).
 
-If more than one host uses the same working directory, then the files generated by the different profilers will overwrite each other. To avoid this clash, certain metacharacters need to be inserted into the database and statistics file names in the configuration file (e.g.: `%h` inserts the host name, or `%p` inserts the HC’s process ID, etc; see <<7-the_run-time_configuration_file.adoc#setting-output-files, here>>).
+If more than one host uses the same working directory, then the files generated by the different profilers will overwrite each other. To avoid this clash, certain metacharacters need to be inserted into the database and statistics file names in the configuration file (e.g.: `%h` inserts the host name, or `%p` inserts the HC's process ID, etc.; see <<7-the_run-time_configuration_file.adoc#setting-output-files, here>>).
 
 The `ttcn3_profmerge` tool can be used to merge the database files of multiple Host Controllers and to generate statistics from them.
 
@@ -8768,7 +8773,7 @@ type enumerated MyEnum {
   e_sixth (oct2int('12'O)), // allowed value known at compile time
   e_seventh (2+3), // allowed value known at compile time
   e_eight (c_myint), // allowed value known at compile time
-  e_ninth (f()), // not allowed, functions’ return values are not known at
+  e_ninth (f()), // not allowed, functions' return values are not known at
                  // compile time
 }
 
@@ -8795,7 +8800,7 @@ Restrictions:
 * Port variables can only be constants, variables, templates and var templates.
 * When a port is working in translation mode the `to address` clause is not supported in send operations.
 * Translation functions shall have the `prototype(fast)` extension.
-* Test port parameters from the config file cannot be given with wildcard in the place of the component for ports with translation capability. For example `\*.*.param_name:=”param_value”` will not be passed to the port but with the following line the port parameter will be passed: `system.*.param_name:=”param_value”`
+* Test port parameters from the config file cannot be given with wildcard in the place of the component for ports with translation capability. For example `\*.*.param_name:="param_value"` will not be passed to the port but with the following line the port parameter will be passed: `system.*.param_name:="param_value"`
+* A difference from the dual-faced test ports is that the ports with translation capability can work in two modes: normal mode and translation mode. In normal mode, the port behaves as a standard messaging port, while in translation mode it works as defined in clause 5.2 of the ES 202 781 standard (<<13-references.adoc#_21, [21]>>). A test port skeleton must be generated or a real test port implementation should be available for the port type with the translation capability, to ensure that the port can work in both modes. The test port skeleton is not needed when the port with translation capability is marked as an `internal` port. In this case the port can only work in translation mode.
 
 Known issues:
@@ -8843,7 +8848,7 @@ function char_to_char(in charstring input, out charstring
     output) {
   port.setstate(4); // charstring messages are discarded
 } with {
-  extension “prototype(fast)”
+  extension "prototype(fast)"
 }
 
 	type port DataPort message map to TransportPort {
-- 
GitLab