
16.NS: Notation Summary


% = percentage

∆% = constant growth rate per payment interval or the percent change from one period to another

Σ = summation symbol meaning to add everything up

ACD = annual cost of the bond debt

AI = accrued interest

AV = assessed value of a property

BAL = principal balance immediately after an annuity payment

BALP1 = principal balance immediately prior to the first payment in a series of annuity payments

BALP2 = principal balance immediately after the last payment in a series of annuity payments

Base = the whole quantity

BVD = book value of the bond debt

C = cost

Cash Price = bond price summing market price & accrued interest

CFo = initial cash flow

CM = unit contribution margin

CPI = Consumer Price Index

CPN = nominal bond coupon rate of interest

CR = contribution rate

Current Currency = currency being converted from

CY or C/Y= compounds per year, or compounding frequency

CYNew = the new compounding frequency

CYOld = the old compounding frequency

D$ = discount amount or markdown amount

d = discount rate or markdown rate

Date Price = bond price on the interest payment date preceding the sale date

dec = any number in decimal format

Desired Currency = currency you are converting to

Discount = bond discount amount

E = expenses

Exchange Rate = per unit conversion rate for current currency

Face Value = the face (par) value of a bond or note

FV = maturity value or future value in dollars

FVDUE = future value of annuity due

FVORD = future value of ordinary annuity

GAvg = geometric average

GE = gross earnings

I = interest amount in dollars

i = periodic interest rate

iNew = the new periodic interest rate

INT = interest portion

INTDUE = interest portion of a due single annuity payment

INTSFDUE = the interest portion of any single annuity due sinking fund payment

iOld = the old periodic interest rate

IY or I/Y = nominal interest rate per year

L = list price

ln = natural logarithm

M$ = markup amount

MM = maintained markup

MoC% = markup on cost percentage

MoS% = markup on selling price percentage

n = a frequency or count or total

N = in merchandising it is the net price; in single payment compound interest it is the number of compound periods; in annuity compound interest it is the number of annuity payments

New = the value that a quantity has become

NI = net income

NPV = net present value

NPVRATIO = net present value equivalent annual ratio

Old = the value that a quantity used to be

P = profit; or principal

PMT = annuity payment amount

PMTBOND = bond annuity payment

Portion = the part of the whole

PPD = purchasing power of a dollar

Premium = bond premium amount

PRI = bond market price

PRN = principal portion

Property Tax = property tax amount

PTR = property tax rate

PV = principal or present value

PVDUE = present value of annuity due

PVORD = present value of ordinary annuity

PY or P/Y = payments per year or payment frequency

r = interest rate (in decimal format)

Rate = portion and base relationship; sales tax rate

Remit = tax remittance

RI = real income

RoC = rate of change per time period

S = regular or unit selling price; or sum of principal/interest

SAvg = simple average

SBE = the selling price at the break-even point

Sonsale = sale price

Stax = selling price including taxes

t = time or term; time ratio

Tax Collected = total tax collected through sales

Tax Paid = total tax paid through purchases

TFC = total fixed costs

TR = total revenue

TVC = total variable cost

VC = unit variable cost

w = weighting factor

WAvg = weighted average

x = any individual piece of data

Years = the term of an annuity


What does NS stand for in reference to threads?

The NS (American National Thread-Special) thread type was absorbed (in a way) into the Unified Series during WW2. Apparently there were/are "subtle", "benign", or "minor" differences between the NS and UNS thread.

Wayne

Gage Crib Worldwide

Re: What does NS stand for in ref. to threads.

SUMMARY
The American National Standard has been obsolete since 1949 and replaced with the Unified National Standard. In all cases, the threads made to the Unified National Standard are designed to screw together with the obsolete American National Standard. Without exception, drawings should be updated to reflect the current standard. The class-of-fit requirements for the obsolete American National Standard can be translated to the current Unified National Standard. As always, obtain approval from your customer.

HISTORY
The American National Standard was replaced with the Unified National Standard for two reasons. The first reason was to provide interchangeability with Canada and the United Kingdom. The second reason was to correct certain thread production difficulties.

Thread makers were told to translate the obsolete American National Standard thread requirements on existing drawings to the Unified National Standard using the comparable class-of-fit. There was resistance to change because of the existing inventory of gages. Thread makers were told to use their existing gages until they needed to be replaced and then replace them with the Unified National Standard gages.

TRANSLATION
N changes to UN
NS changes to UNS
NC changes to UNC
NF changes to UNF
NEF changes to UNEF
Class 1 changes to 1A for external threads or 1B for internal threads
Class 2 changes to 2A for external threads or 2B for internal threads
Class 3 changes to 3A for external threads or 3B for internal threads
Class 4 obsolete
Class 5 is still used for interference threads

SUMMARY OF DIFFERENCES
Several changes were made that were specific to nomenclature. Minor changes were made to the general thread form of the end product to conform to manufacturing realities. Some benign changes were made relating to the major and minor diameters. Changes were also made to pitch diameters to remove tolerance issues which made the threads nearly impossible to manufacture and gage. Under the obsolete American National Standard the product tolerances were practically absorbed by the combined tool and gage tolerances, leaving little working tolerance in manufacture.



Permutations called hexagrams were used in China in the I Ching (Pinyin: Yi Jing) as early as 1000 BC.

Al-Khalil (717–786), an Arab mathematician and cryptographer, wrote the Book of Cryptographic Messages. It contains the first use of permutations and combinations, to list all possible Arabic words with and without vowels. [4]

The rule to determine the number of permutations of n objects was known in Indian culture around 1150. The Lilavati by the Indian mathematician Bhaskara II contains a passage that translates to:

The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures. [5]

In 1677, Fabian Stedman described factorials when explaining the number of permutations of bells in change ringing. Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1. [6] He then explains that with three bells there are "three times two figures to be produced out of three" which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". [7] He then moves on to four bells and repeats the casting away argument showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations. [8] At this point he gives up and remarks:

Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body [9]

Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20. [10]

A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it.

The simplest example of permutations is permutations without repetitions where we consider the number of possible ways of arranging n items into n places. The factorial has special application in defining the number of permutations in a set which does not include repetitions. The number n!, read "n factorial", [11] is precisely the number of ways we can rearrange n things into a new order. For example, if we have three fruits: an orange, apple and pear, we can eat them in the order mentioned, or we can change them (for example, an apple, a pear then an orange). The exact number of permutations is then 3! = 1 ⋅ 2 ⋅ 3 = 6. The number gets extremely large as the number of items (n) goes up.
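This count is easy to check with Python's standard itertools module; a quick sketch using the three fruits from the example above:

```python
from itertools import permutations
from math import factorial

fruits = ["orange", "apple", "pear"]
orders = list(permutations(fruits))
for order in orders:
    print(order)

# The count matches 3! = 6 distinct eating orders.
assert len(orders) == factorial(3)
```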

In mathematics texts it is customary to denote permutations using lowercase Greek letters. Commonly, either α and β , or σ , τ and π are used. [15]

Permutations can be defined as bijections from a set S onto itself. All permutations of a set with n elements form a symmetric group, denoted S_n, where the group operation is function composition. Thus for two permutations, π and σ in the group S_n, the four group axioms hold: closure, associativity, identity, and invertibility.

In general, composition of two permutations is not commutative, that is, π σ ≠ σ π .

As a bijection from a set to itself, a permutation is a function that performs a rearrangement of a set, and is not a rearrangement itself. An older and more elementary viewpoint is that permutations are the rearrangements themselves. To distinguish between these two, the identifiers active and passive are sometimes prefixed to the term permutation, whereas in older terminology substitutions and permutations are used. [16]

A permutation can be decomposed into one or more disjoint cycles, that is, the orbits, which are found by repeatedly tracing the application of the permutation on some elements. For example, the permutation σ defined by σ(7) = 7 has a 1-cycle (7), while the permutation π defined by π(2) = 3 and π(3) = 2 has a 2-cycle (2 3) (for details on the syntax, see § Cycle notation below). In general, a cycle of length k, that is, consisting of k elements, is called a k-cycle.

An element in a 1-cycle (x) is called a fixed point of the permutation. A permutation with no fixed points is called a derangement. 2-cycles are called transpositions; such permutations merely exchange two elements, leaving the others fixed.
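The orbit-tracing procedure described above can be sketched in Python. This is an illustrative helper, not a standard API; a permutation is represented as a dict mapping each element of the set to its image:

```python
def cycles(perm):
    """Disjoint-cycle decomposition of a permutation given as a dict
    mapping each element of the set to its image."""
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [start], perm[start]
        seen.add(start)
        while x != start:               # trace the orbit back to its start
            cycle.append(x)
            seen.add(x)
            x = perm[x]
        result.append(tuple(cycle))
    return result

print(cycles({7: 7}))                          # [(7,)] — a fixed point
print(cycles({2: 3, 3: 2}))                    # [(2, 3)] — a transposition
print(cycles({1: 2, 2: 5, 3: 4, 4: 3, 5: 1}))  # [(1, 2, 5), (3, 4)]
```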

Since writing permutations elementwise, that is, as piecewise functions, is cumbersome, several notations have been invented to represent them more compactly. Cycle notation is a popular choice for many mathematicians due to its compactness and the fact that it makes a permutation's structure transparent. It is the notation used in this article unless otherwise specified, but other notations are still widely used, especially in application areas.

Two-line notation

In Cauchy's two-line notation, [17] one lists the elements of S in the first row, and for each one its image below it in the second row. For instance, a particular permutation of the set S = {1, 2, 3, 4, 5} can be written as:

σ = ( 1 2 3 4 5
      2 5 4 3 1 )

this means that σ satisfies σ(1) = 2, σ(2) = 5, σ(3) = 4, σ(4) = 3, and σ(5) = 1. The elements of S may appear in any order in the first row. This permutation could also be written as:

σ = ( 3 2 1 5 4
      4 5 2 1 3 )

One-line notation

If there is a "natural" order for the elements of S, one uses it for the first row of the two-line notation. Under this assumption, one may omit the first row and write the permutation in one-line notation as σ = 2 5 4 3 1,

that is, an ordered arrangement of S. [18] [19] Care must be taken to distinguish one-line notation from the cycle notation described below. In mathematics literature, a common usage is to omit parentheses for one-line notation, while using them for cycle notation. The one-line notation is also called the word representation of a permutation. [20] The example above would then be 2 5 4 3 1 since the natural order 1 2 3 4 5 would be assumed for the first row. (It is typical to use commas to separate these entries only if some have two or more digits.) This form is more compact, and is common in elementary combinatorics and computer science. It is especially useful in applications where the elements of S or the permutations are to be compared as larger or smaller.

Cycle notation

Cycle notation describes the effect of repeatedly applying the permutation on the elements of the set. It expresses the permutation as a product of cycles; since distinct cycles are disjoint, this is referred to as "decomposition into disjoint cycles". [b]

Since for every new cycle the starting point can be chosen in different ways, there are in general many different cycle notations for the same permutation; for the example above one has:

σ = (1 2 5)(3 4) = (3 4)(1 2 5) = (5 1 2)(4 3)

1-cycles are often omitted from the cycle notation, provided that the context is clear; for any element x in S not appearing in any cycle, one implicitly assumes σ(x) = x. [21] The identity permutation, which consists only of 1-cycles, can be denoted by a single 1-cycle (x), by the number 1, [c] or by id. [22] [23]

A convenient feature of cycle notation is that one can find a permutation's inverse simply by reversing the order of the elements in the permutation's cycles. For example, the inverse of (1 2 5)(3 4) is (5 2 1)(4 3).
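The reversal trick can be sketched in Python. The helper names `invert_cycles` and `to_map` are mine, and the permutation (1 2 5)(3 4) is just a sample; the final assertion checks that composing the two maps gives the identity:

```python
def invert_cycles(cycle_list):
    """Invert a permutation in cycle notation by reversing each cycle."""
    return [tuple(reversed(c)) for c in cycle_list]

def to_map(cycle_list):
    """Expand cycle notation into a dict mapping each element to its image."""
    m = {}
    for c in cycle_list:
        for a, b in zip(c, c[1:] + c[:1]):
            m[a] = b
    return m

sigma = [(1, 2, 5), (3, 4)]
inv = invert_cycles(sigma)
print(inv)   # [(5, 2, 1), (4, 3)] — the same cycles read backwards

f, g = to_map(sigma), to_map(inv)
assert all(g[f[x]] == x for x in f)   # composing with the inverse fixes everything
```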

Canonical cycle notation (a.k.a. standard form)

In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the canonical cycle notation:

  • in each cycle the largest element is listed first
  • the cycles are sorted in increasing order of their first element

For example, (312)(54)(8)(976) is a permutation in canonical cycle notation. [24] The canonical cycle notation does not omit one-cycles.
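Both ordering choices can be applied mechanically. A sketch, assuming the permutation is given as a dict mapping each element to its image (`canonical_cycles` is an illustrative name, not a library function):

```python
def canonical_cycles(perm):
    """Canonical cycle notation (Bona): within each cycle the largest
    element is listed first, and cycles are sorted in increasing order
    of their first elements; 1-cycles are kept."""
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        k = cycle.index(max(cycle))            # rotate the largest to the front
        result.append(tuple(cycle[k:] + cycle[:k]))
    result.sort(key=lambda c: c[0])
    return result

# The text's example (312)(54)(8)(976) as a mapping on {1, ..., 9}:
perm = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4, 6: 9, 7: 6, 8: 8, 9: 7}
print(canonical_cycles(perm))   # [(3, 1, 2), (5, 4), (8,), (9, 7, 6)]
```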

Richard P. Stanley calls the same choice of representation the "standard representation" of a permutation, [25] and Martin Aigner uses the term "standard form" for the same notion. [20] Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its least element first, and the cycles are sorted in decreasing order of their least (that is, first) elements. [26]

Composition of permutations

Some authors prefer the leftmost factor acting first, [28] [29] [30] but to that end permutations must be written to the right of their argument, often as an exponent, where σ acting on x is written x^σ; the product is then defined by x^(σ·π) = (x^σ)^π. However, this gives a different rule for multiplying permutations; this article uses the definition where the rightmost permutation is applied first.
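The rightmost-first convention used in this article can be made concrete with a small sketch (dict-based permutations; `compose` is an illustrative helper). It also shows that composition is not commutative:

```python
def compose(pi, sigma):
    """Product pi . sigma with the rightmost factor applied first:
    (pi . sigma)(x) = pi(sigma(x))."""
    return {x: pi[sigma[x]] for x in sigma}

pi = {1: 2, 2: 1, 3: 3}      # the transposition (1 2)
sigma = {1: 1, 2: 3, 3: 2}   # the transposition (2 3)
print(compose(pi, sigma))    # {1: 2, 2: 3, 3: 1}
print(compose(sigma, pi))    # {1: 3, 2: 1, 3: 2} — a different permutation
```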

The concept of a permutation as an ordered arrangement admits several generalizations that are not permutations, but have been called permutations in the literature.

K-permutations of n

A weaker meaning of the term permutation, sometimes used in elementary combinatorics texts, designates those ordered arrangements in which no element occurs more than once, but without the requirement of using all the elements from a given set. These are not permutations except in special cases, but are natural generalizations of the ordered arrangement concept. Indeed, this use often involves considering arrangements of a fixed length k of elements taken from a given set of size n, in other words, these k-permutations of n are the different ordered arrangements of a k-element subset of an n-set (sometimes called variations or arrangements in older literature [d] ). These objects are also known as partial permutations or as sequences without repetition, terms that avoid confusion with the other, more common, meaning of "permutation". The number of such k-permutations of n is denoted variously by such symbols as P(n, k), P_{n,k}, nPk, or nP_k, and its value is given by the product [31]

P(n, k) = n (n − 1) (n − 2) ⋯ (n − k + 1),

which is 0 when k > n, and otherwise is equal to

P(n, k) = n! / (n − k)!

This usage of the term permutation is closely related to the term combination. A k-element combination of an n-set S is a k-element subset of S, the elements of which are not ordered. By taking all the k-element subsets of S and ordering each of them in all possible ways, we obtain all the k-permutations of S. The number of k-combinations of an n-set, C(n, k), is therefore related to the number of k-permutations of n by:

C(n, k) = P(n, k) / k! = n! / (k! (n − k)!)

These numbers are also known as binomial coefficients and are denoted C(n, k), read "n choose k".
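These identities can be checked directly with Python's standard math module (math.perm and math.comb require Python 3.8+):

```python
from math import comb, factorial, perm

n, k = 5, 3
print(perm(n, k))   # 60 = 5 * 4 * 3
print(comb(n, k))   # 10

# P(n, k) = C(n, k) * k!, and P(n, k) = 0 when k > n.
assert perm(n, k) == comb(n, k) * factorial(k)
assert perm(3, 5) == 0
```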

Permutations with repetition

Ordered arrangements of n elements of a set S, where repetition is allowed, are called n-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in general. They are also called words over the alphabet S in some contexts. If the set S has k elements, the number of n-tuples over S is k^n. There is no restriction on how often an element can appear in an n-tuple, but if restrictions are placed on how often an element can appear, this formula is no longer valid.

Permutations of multisets

If M is a finite multiset, then a multiset permutation is an ordered arrangement of elements of M in which each element appears a number of times equal exactly to its multiplicity in M. An anagram of a word having some repeated letters is an example of a multiset permutation. [e] If the multiplicities of the elements of M (taken in some order) are m1, m2, ..., ml and their sum (that is, the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient, [32]

n! / (m1! m2! ⋯ ml!)

For example, the number of distinct anagrams of the word MISSISSIPPI (1 M, 4 I's, 4 S's, 2 P's) is: [33]

11! / (1! 4! 4! 2!) = 34650

A k-permutation of a multiset M is a sequence of length k of elements of M in which each element appears a number of times less than or equal to its multiplicity in M (an element's repetition number).
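The multinomial count above is easy to verify with a short Python sketch (`multiset_permutation_count` is my name for the helper):

```python
from math import factorial
from collections import Counter

def multiset_permutation_count(word):
    """n! divided by the product of the factorials of the letter multiplicities."""
    count = factorial(len(word))
    for m in Counter(word).values():
        count //= factorial(m)
    return count

print(multiset_permutation_count("MISSISSIPPI"))   # 34650
```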

Circular permutations

Permutations, when considered as arrangements, are sometimes referred to as linearly ordered arrangements. In these arrangements there is a first element, a second element, and so on. If, however, the objects are arranged in a circular manner this distinguished ordering no longer exists, that is, there is no "first element" in the arrangement; any element can be considered as the start of the arrangement. The arrangements of objects in a circular manner are called circular permutations. [34] [f] These can be formally defined as equivalence classes of ordinary permutations of the objects, for the equivalence relation generated by moving the final element of the linear arrangement to its front.

Two circular permutations are equivalent if one can be rotated into the other (that is, cycled without changing the relative positions of the elements). The following two circular permutations on four letters are considered to be the same.

The circular arrangements are to be read counterclockwise, so the following two are not equivalent since no rotation can bring one to the other.

The number of circular permutations of a set S with n elements is (n – 1)!.
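The (n − 1)! count can be checked by brute force, grouping all linear arrangements into rotation classes. This is an illustrative sketch, not an efficient method:

```python
from itertools import permutations
from math import factorial

def circular_classes(elements):
    """Group all linear arrangements into equivalence classes under
    rotation, keeping one representative (the lexicographically least
    rotation) per class."""
    reps = set()
    for p in permutations(elements):
        reps.add(min(p[i:] + p[:i] for i in range(len(p))))
    return sorted(reps)

classes = circular_classes("ABCD")
print(len(classes))                             # 6
assert len(classes) == factorial(len("ABCD") - 1)
```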

The number of permutations of n distinct objects is n !.

The number of n -permutations with k disjoint cycles is the signless Stirling number of the first kind, denoted by c(n, k) . [35]

Permutation type

The cycle type of a permutation is the list of the lengths of the cycles in its disjoint-cycle decomposition (including 1-cycles); for example, a permutation of five elements consisting of one 3-cycle and one 2-cycle has cycle type (3, 2).

Conjugating permutations

In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However the cycle structure is preserved in the special case of conjugating a permutation σ by another permutation π, which means forming the product πσπ^−1. Here, πσπ^−1 is the conjugate of σ and its cycle notation can be obtained by taking the cycle notation for σ and applying π to all the entries in it. [37] It follows that two permutations are conjugate exactly when they have the same type.

Permutation order

Parity of a permutation

Every permutation of a finite set can be expressed as the product of transpositions. [38] Although many such expressions for a given permutation may exist, they either all contain an even number of transpositions or all contain an odd number. Thus all permutations can be classified as even or odd depending on this number.
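Parity can be computed from the cycle decomposition, since a k-cycle factors into k − 1 transpositions; a sketch (`parity` is an illustrative helper, with dict-based permutations):

```python
def parity(perm):
    """Return 0 for an even permutation, 1 for odd.

    Each cycle of length k factors into k - 1 transpositions, so the
    parity is (n - number of cycles) mod 2, counting fixed points as
    1-cycles."""
    seen, n_cycles = set(), 0
    for start in perm:
        if start in seen:
            continue
        n_cycles += 1
        x = start
        while x not in seen:
            seen.add(x)
            x = perm[x]
    return (len(perm) - n_cycles) % 2

print(parity({1: 2, 2: 1, 3: 3}))   # 1 — a single transposition is odd
print(parity({1: 2, 2: 3, 3: 1}))   # 0 — a 3-cycle is two transpositions
```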

Matrix representation

One can represent a permutation of {1, 2, ..., n} as an n×n matrix. There are two natural ways to do so, but only one for which multiplication of matrices corresponds to multiplication of permutations in the same order: this is the one that associates to σ the matrix M whose entry Mi,j is 1 if i = σ(j), and 0 otherwise. The resulting matrix has exactly one entry 1 in each column and in each row, and is called a permutation matrix.
Here is a list of these matrices for permutations of 4 elements. The Cayley table on the right shows these matrices for permutations of 3 elements.
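A minimal sketch of building such a matrix from one-line notation, with nested 0-indexed lists standing in for the matrix:

```python
def permutation_matrix(sigma):
    """Matrix M with M[i][j] = 1 if i = sigma(j) (1-indexed as in the text),
    so that matrix multiplication matches composition in the same order.
    `sigma` is given in one-line notation as a tuple over 1..n."""
    n = len(sigma)
    return [[1 if sigma[j] == i + 1 else 0 for j in range(n)]
            for i in range(n)]

for row in permutation_matrix((2, 3, 1)):
    print(row)
# [0, 0, 1]
# [1, 0, 0]
# [0, 1, 0]
```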

Foata's transition lemma

There is a relationship between the one-line and the canonical cycle notation. Consider the permutation (2)(3 1) in canonical cycle notation; if we erase its cycle parentheses, we obtain the permutation (2, 3, 1) in one-line notation. Foata's transition lemma establishes the nature of this correspondence as a bijection on the set of n-permutations (to itself). [39] Richard P. Stanley calls this correspondence the fundamental bijection. [25]

As a first corollary, the number of n-permutations with exactly k left-to-right maxima is also equal to the signless Stirling number of the first kind, c(n, k). Furthermore, Foata's mapping takes an n-permutation with k weak excedances to an n-permutation with k − 1 ascents. [39] For example, (2)(31) = 321 has two weak excedances (at indices 1 and 2), whereas f(321) = 231 has one ascent (at index 1; that is, from 2 to 3).
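The fundamental bijection can be sketched directly from its definition: write the permutation in canonical cycle notation, then erase the parentheses (`foata` is an illustrative name):

```python
def foata(sigma):
    """Fundamental bijection: write sigma (one-line notation over 1..n)
    in canonical cycle notation and erase the parentheses, yielding a
    new permutation in one-line notation."""
    n = len(sigma)
    seen, cycles = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = sigma[x - 1]
        k = cycle.index(max(cycle))          # largest element first
        cycles.append(cycle[k:] + cycle[:k])
    cycles.sort(key=lambda c: c[0])          # increasing first elements
    return tuple(v for c in cycles for v in c)

print(foata((3, 2, 1)))   # (2, 3, 1) — the text's example (2)(31) -> 231
```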

In some applications, the elements of the set being permuted will be compared with each other. This requires that the set S has a total order so that any two elements can be compared. The set {1, 2, ..., n} is totally ordered by the usual "≤" relation and so it is the most frequently used set in these applications, but in general, any totally ordered set will do. In these applications, the ordered arrangement view of a permutation is needed to talk about the positions in a permutation.

There are a number of properties that are directly related to the total ordering of S.

Ascents, descents, runs and excedances

An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, if σ = σ1σ2...σn, then i is an ascent if σi < σi+1.

For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6.

Similarly, a descent is a position i < n with σi > σi+1, so every i with 1 ≤ i < n either is an ascent or is a descent of σ.

An ascending run of a permutation is a nonempty increasing contiguous subsequence of the permutation that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence of elements obtained from the permutation by omitting the values at some positions. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367.

If a permutation has k − 1 descents, then it must be the union of k ascending runs. [40]

An excedance of a permutation σ1σ2...σn is an index j such that σj > j. If the inequality is not strict (that is, σj ≥ j), then j is called a weak excedance. The number of n-permutations with k excedances coincides with the number of n-permutations with k descents. [42]
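The statistics above are easy to compute from one-line notation; a sketch with illustrative helpers, using 1-indexed positions as in the text:

```python
def ascents(sigma):
    """Positions i (1-indexed) with sigma_i < sigma_{i+1}."""
    return [i for i in range(1, len(sigma)) if sigma[i - 1] < sigma[i]]

def descents(sigma):
    """Positions i (1-indexed) with sigma_i > sigma_{i+1}."""
    return [i for i in range(1, len(sigma)) if sigma[i - 1] > sigma[i]]

def excedances(sigma):
    """Indices j (1-indexed) with sigma_j > j."""
    return [j for j in range(1, len(sigma) + 1) if sigma[j - 1] > j]

sigma = (3, 4, 5, 2, 1, 6, 7)     # the text's example 3452167
print(ascents(sigma))             # [1, 2, 5, 6]
print(descents(sigma))            # [3, 4]
print(excedances(sigma))          # [1, 2, 3]
```

Since every pair of adjacent entries of a permutation is distinct, the two lists of ascents and descents together cover all positions 1 ≤ i < n, as the text notes.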

Inversions

An inversion of a permutation σ is a pair (i, j) of positions where the entries of a permutation are in the opposite order: i < j and σi > σj. [44] So a descent is just an inversion at two adjacent positions. For example, the permutation σ = 23154 has three inversions: (1, 3), (2, 3), and (4, 5), for the pairs of entries (2, 1), (3, 1), and (5, 4).

Sometimes an inversion is defined as the pair of values (σi, σj) whose order is reversed; this makes no difference for the number of inversions, and this pair (reversed) is also an inversion in the above sense for the inverse permutation σ−1. The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ−1. To bring a permutation with k inversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and i + 1 where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally, this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this, one may simply reverse any sequence of such transpositions that transforms σ into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions.
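A brute-force sketch of listing inversions, plus a check that sorting by adjacent transpositions (here always swapping at the leftmost descent, as in bubble sort) takes exactly k steps (`inversions` is my helper name):

```python
def inversions(sigma):
    """All pairs of positions (i, j), i < j, with sigma_i > sigma_j (1-indexed)."""
    n = len(sigma)
    return [(i, j) for i in range(1, n) for j in range(i + 1, n + 1)
            if sigma[i - 1] > sigma[j - 1]]

sigma = (2, 3, 1, 5, 4)           # the text's example 23154
print(inversions(sigma))          # [(1, 3), (2, 3), (4, 5)]

# Each swap at a descent removes exactly one inversion, so sorting
# takes exactly k = number-of-inversions steps.
s, steps = list(sigma), 0
while True:
    d = next((i for i in range(len(s) - 1) if s[i] > s[i + 1]), None)
    if d is None:
        break
    s[d], s[d + 1] = s[d + 1], s[d]
    steps += 1
assert s == sorted(sigma) and steps == len(inversions(sigma))
```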

The number of permutations of n with k inversions is expressed by a Mahonian number; [45] it is the coefficient of X^k in the expansion of the product

1 (1 + X) (1 + X + X^2) ⋯ (1 + X + X^2 + ⋯ + X^(n−1)),

which is also known (with q substituted for X) as the q-factorial [n]_q!. The expansion of the product appears in Necklace (combinatorics).

Numbering permutations

One way to represent permutations of n is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers dn, dn−1, ..., d2, d1, where di is a non-negative integer less than i (one may omit d1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply express N in the factorial number system, which is just a particular mixed radix representation, where for numbers up to n! the bases for successive digits are n, n − 1, ..., 2, 1. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table.

In the Lehmer code for a permutation σ, the number dn represents the choice made for the first term σ1, the number dn−1 represents the choice made for the second term σ2 among the remaining n − 1 elements of the set, and so forth. More precisely, each dn+1−i gives the number of remaining elements strictly less than the term σi. Since those remaining elements are bound to turn up as some later term σj, the digit dn+1−i counts the inversions (i, j) involving i as the smaller index (the number of values j for which i < j and σi > σj). The inversion table for σ is quite similar, but here dn+1−k counts the number of inversions (i, j) where k = σj occurs as the smaller of the two values appearing in inverted order. [46] Both encodings can be visualized by an n by n Rothe diagram [47] (named after Heinrich August Rothe) in which dots at (i, σi) mark the entries of the permutation, and a cross at (i, σj) marks the inversion (i, j); by the definition of inversions, a cross appears in any square that comes both before the dot (j, σj) in its column and before the dot (i, σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa.

To effectively convert a Lehmer code dn, dn−1, ..., d2, d1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σi to the element in the list that is preceded by dn+1−i other ones, and remove that element from the list. To convert an inversion table dn, dn−1, ..., d2, d1 into the corresponding permutation, one can traverse the numbers from d1 to dn while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S is inserted into the sequence at the point where it is preceded by d elements already present. Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots.

Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover, the positions of the zeroes in the inversion table give the values of the left-to-right maxima of the permutation, while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima; this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code dn, dn−1, ..., d2, d1 has an ascent n − i if and only if di ≥ di+1.
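The number-to-permutation direction can be sketched as described: express N in the factorial number system, then read the digits as a Lehmer code (an illustrative helper using O(n^2) list pops, not an optimized routine):

```python
from math import factorial

def number_to_permutation(N, n):
    """Map 0 <= N < n! to a permutation of 1..n: write N in the factorial
    number system, then read the digits as a Lehmer code."""
    assert 0 <= N < factorial(n)
    digits = []
    for base in range(1, n + 1):       # digits d_1, d_2, ..., d_n
        digits.append(N % base)
        N //= base
    digits.reverse()                   # now d_n, ..., d_1
    remaining = list(range(1, n + 1))  # unused elements, in increasing order
    # pop(d) picks the element preceded by d smaller remaining elements
    return tuple(remaining.pop(d) for d in digits)

print(number_to_permutation(0, 4))    # (1, 2, 3, 4) — N = 0 is the identity
print(number_to_permutation(23, 4))   # (4, 3, 2, 1) — N = 4! - 1 is the last
```

Because the digits are produced in lexicographic order of N, successive values of N yield the permutations in lexicographic order, as the paragraph above notes.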

Algorithms to generate permutations

In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account if so, one should only generate distinct multiset permutations of the sequence.

An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n^2/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time.

Random generation of permutations

For generating random permutations of a given sequence of n values, it makes no difference whether one applies a randomly selected permutation of n to the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes infeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation.

The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d1, d2, ..., dn satisfying 0 ≤ di < i (since d1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald Fisher and Frank Yates. [48] While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using di to select an element among i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated.

The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode:

This can be combined with the initialization of the array a[i] = i as follows

If di+1 = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i.
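Sketched in Python (a hypothetical rendering, not the document's own pseudocode), the combined generation and initialization reads:

```python
import random

def random_permutation(n):
    # Build a random permutation of 0..n-1 while initializing the array.
    a = [None] * n
    for i in range(n):
        d = random.randint(0, i)
        a[i] = a[d]   # when d == i this copies the uninitialized slot,
        a[d] = i      # but this assignment immediately overwrites it with i
    return a
```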

However, Fisher-Yates is not the fastest algorithm for generating a permutation: Fisher-Yates is essentially a sequential algorithm, whereas "divide and conquer" procedures can achieve the same result in parallel. [49]

Generation in lexicographic order

There are many ways to systematically generate all permutations of a given sequence. [50] One classic, simple, and flexible algorithm is based upon finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, for which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been rediscovered frequently. [51]

The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.

  1. Find the largest index k such that a[k] < a[k + 1] . If no such index exists, the permutation is the last permutation.
  2. Find the largest index l greater than k such that a[k] < a[l] .
  3. Swap the value of a[k] with that of a[l].
  4. Reverse the sequence from a[k + 1] up to and including the final element.

For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index is zero-based, the steps are as follows:

  1. Index k = 2, because a[2] = 3 is at the largest index satisfying a[k] < a[k + 1] (since a[3] = 4).
  2. Index l = 3, because 4 is the only value in the sequence greater than 3, so it satisfies the condition a[k] < a[l].
  3. The values of a[2] and a[3] are swapped to form the new sequence [1, 2, 4, 3].
  4. The sequence after index k = 2, from a[3] to the final element, is reversed. Because only one value lies after this index (the 3), the sequence remains unchanged in this instance. Thus the lexicographic successor of [1, 2, 3, 4] is [1, 2, 4, 3].

Following this algorithm, the next lexicographic permutation will be [1,3,2,4], and the 24th permutation will be [4,3,2,1], at which point no index k with a[k] < a[k + 1] exists, indicating that this is the last permutation.

This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort. [52]
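The four steps can be sketched in Python as an in-place next_permutation; the ≥ comparisons are what make it generate each distinct multiset permutation exactly once when values repeat:

```python
def next_permutation(a):
    # Advance list a to its lexicographic successor in place.
    # Returns False (and leaves a as the last permutation) if none exists.
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:   # step 1: largest k with a[k] < a[k+1]
        k -= 1
    if k < 0:
        return False                     # a was the last permutation
    l = len(a) - 1
    while a[k] >= a[l]:                  # step 2: largest l > k with a[k] < a[l]
        l -= 1
    a[k], a[l] = a[l], a[k]              # step 3: swap
    a[k + 1:] = reversed(a[k + 1:])      # step 4: reverse the suffix
    return True
```

Starting from the sorted sequence and calling this repeatedly enumerates all (multiset) permutations in lexicographic order.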

Generation with minimal changes

An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation. [51]

An alternative to Steinhaus–Johnson–Trotter is Heap's algorithm, [53] said by Robert Sedgewick in 1977 to be the fastest algorithm for generating permutations in applications. [50]
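For reference, Heap's method can be written iteratively as follows (a sketch; each yielded permutation differs from the previous one by a single transposition):

```python
def heaps_permutations(a):
    # Iterative form of Heap's algorithm. Yields copies of a; consecutive
    # outputs differ by exactly one swap of two entries.
    n = len(a)
    c = [0] * n                           # c[i] plays the role of a loop counter
    yield a[:]
    i = 1
    while i < n:
        if c[i] < i:
            j = 0 if i % 2 == 0 else c[i]   # even level: swap with a[0]
            a[j], a[i] = a[i], a[j]
            yield a[:]
            c[i] += 1
            i = 1
        else:
            c[i] = 0
            i += 1
```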

The following figure shows the output of all three aforementioned algorithms for generating all permutations of length n = 4 , and of six additional algorithms described in the literature.

  1. Lexicographic ordering
  2. Ehrlich's star-transposition algorithm: [51] in each step, the first entry of the permutation is exchanged with a later entry
  3. Zaks' prefix reversal algorithm: [55] in each step, a prefix of the current permutation is reversed to obtain the next permutation
  4. Sawada-Williams' algorithm: [56] each permutation differs from the previous one either by a cyclic left-shift by one position, or an exchange of the first two entries
  5. Corbett's algorithm: [57] each permutation differs from the previous one by a cyclic left-shift of some prefix by one position
  6. Single-track ordering: [58] each column is a cyclic shift of the other columns
  7. Single-track Gray code: [58] each column is a cyclic shift of the other columns, plus any two consecutive permutations differ only in one or two transpositions.

Meandric permutations

Meandric systems give rise to meandric permutations, a special subset of alternate permutations. An alternate permutation of the set {1, 2, ..., 2n} is a cyclic permutation (with no fixed points) such that the digits in the cyclic notation form alternate between odd and even integers. Meandric permutations are useful in the analysis of RNA secondary structure. Not all alternate permutations are meandric. A modification of Heap's algorithm has been used to generate all alternate permutations of order n (that is, of length 2n) without generating all (2n)! permutations. [59] Generation of these alternate permutations is needed before they are analyzed to determine whether or not they are meandric.

The algorithm is recursive. The following table exhibits a step in the procedure. In the previous step, all alternate permutations of length 5 have been generated. Three copies of each of these have a "6" added to the right end, and then a different transposition involving this last entry and a previous entry in an even position is applied (including the identity, that is, no transposition).

Previous set    Transposition of digits    Alternate permutation
1-2-3-4-5-6     (identity)                 1-2-3-4-5-6
                4, 6                       1-2-3-6-5-4
                2, 6                       1-6-3-4-5-2
1-2-5-4-3-6     (identity)                 1-2-5-4-3-6
                4, 6                       1-2-5-6-3-4
                2, 6                       1-6-5-4-3-2
1-4-3-2-5-6     (identity)                 1-4-3-2-5-6
                2, 6                       1-4-3-6-5-2
                4, 6                       1-6-3-2-5-4
1-4-5-2-3-6     (identity)                 1-4-5-2-3-6
                2, 6                       1-4-5-6-3-2
                4, 6                       1-6-5-2-3-4
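The tabulated step can be reproduced mechanically; the sketch below (function name my own) appends the next value and applies each transposition of that last entry with an entry in an even position:

```python
def extend_alternate(perm):
    # Append the next value to an alternate permutation, then produce one copy
    # per transposition of the last entry with an entry in an even position
    # (1-based positions 2, 4, ...), including the identity (no transposition).
    base = perm + [len(perm) + 1]
    results = [base[:]]
    for pos in range(1, len(base) - 1, 2):   # 0-based odd indices = even positions
        q = base[:]
        q[pos], q[-1] = q[-1], q[pos]
        results.append(q)
    return results
```

For example, extend_alternate([1, 2, 3, 4, 5]) yields the three permutations in the first block of the table.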

Applications

Permutations are used in the interleaver component of error detection and correction algorithms, such as turbo codes; for example, the 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see the 3GPP technical specification 36.212 [60] ). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on permutation polynomials. Permutations also serve as a basis for optimal hashing in Unique Permutation Hashing. [61]



The Nashville Number System (also referred to as NNS) is similar to (movable-do) solfège, which uses "Do Ré Mi Fa Sol La Ti" to represent the seven scale degrees of the major scale. It is also similar to Roman numeral analysis; however, the NNS uses Arabic numerals to represent each of the scale degrees.

In the key of C, the numbers would correspond as follows:

Nashville numerical notation  1   2   3   4   5   6   7
So-Fa names/Solfège           Do  Ré  Mi  Fa  So  La  Ti
Common musical notation       C   D   E   F   G   A   B


In the key of B♭, the numbers would be B♭=1, C=2, D=3, E♭=4, F=5, G=6, A=7.

The key may be specified at the top of the written chord chart, or given orally by the bandleader, record producer or lead singer. The numbers do not change when transposing the composition into another key. They are simply relative to the new root note. The only knowledge required is to know the major scale for the given key. Unless otherwise notated, all numbers represent major chords, and each chord should be played for one measure.
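Since the numbers are relative to the key's major scale, the transposition can be sketched mechanically. The helper below is hypothetical (not part of the NNS itself) and maps numbers to chord roots only, ignoring chord qualities:

```python
# Hypothetical helper: map Nashville numbers to chord roots in a given key.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]   # semitones above the tonic
NOTES_SHARP = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
NOTES_FLAT  = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']

def nns_to_roots(key, numbers, use_flats=False):
    # Only the major scale of the key is needed, as the text says.
    names = NOTES_FLAT if use_flats else NOTES_SHARP
    tonic = names.index(key)
    return [names[(tonic + MAJOR_SCALE_STEPS[n - 1]) % 12] for n in numbers]
```

For instance, the same chart played in C and in B♭ differs only in the key passed in, not in the numbers.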

So in the key of C, the Nashville Number System notation

1 4 1 5

represents a four-bar phrase, in which the band would play a C major chord (one bar), an F major chord (one bar), a C major chord (one bar), and a G major chord (one bar).

Here is an example of how two four bar phrases can be formed to create a section of a song.


Accidentals modifying a scale degree are usually written to the left of the number. ♭7 ("flat 7") represents a B♭ major chord in the key of C, an A♭ major chord in the key of B♭, or an F major chord in the key of G.

A number by itself represents the diatonic triad on that scale degree:

Nashville numerical notation 1 2 3 4 5 6 7
Chord type (major key) major minor minor major major minor diminished
Chord type (minor key) minor diminished major minor minor major major
Chord type (harmonic minor key) minor diminished augmented minor major major diminished

If the song includes other chords besides these triads, additional notation is needed.

If a chord root is not in the scale, the symbols ♭ or ♯ can be added. In the key of C major, an E♭ triad would be notated as ♭3.

Minor chords (outside the key) are noted with a dash after the number or a lower-case m. In the key of D, 1 is D major; 1- or 1m would be D minor. Similarly, a major chord can be noted with a capital M. In the key of C, 2 is D minor, 2M is D major.

Other chord qualities such as major sevenths, suspended chords, and dominant sevenths use familiar symbols: 4Δ7 5sus 57 1 would stand for FΔ7 Gsus G7 C in the key of C, or E♭Δ7 Fsus F7 B♭ in the key of B♭. A 2 means "add 2" or "add 9".

Chord inversions and chords with other altered bass notes are notated analogously to regular slash chord notation. In the key of C, C/E (C major first inversion, with E bass) is written as 1/3; G/B is written as 5/7; A/G (an inversion of A7) is written as 6M/5; F/G (F major with G bass) is 4/5. Just as with simple chords, the numbers refer to scale degrees; specifically, the scale degree number used for the bass note is that of the note's position in the tonic's scale (as opposed to, for example, its position in the scale of the chord being played). In the key of B♭, 1/3 stands for B♭/D, 5/7 stands for F/A, 6/5 stands for Gm/F, and 4/5 stands for E♭/F.

Depending on the style, chord inversions can be used to achieve an alternative sound when using the Nashville Number System. One example that could be applied in practice is the 1-4-5 chord progression: say we use a C minor chord and couple it with an inverted F minor chord moving to a G diminished chord; the sound changes subtly. This can be effective when musicians are attempting to capture a specific emotion or sound.

Chord qualities

NNS charts use unique rhythmic symbols as well, and variations in practice exist. A diamond shape around a number indicates that the chord should be held out or allowed to ring as a whole note. Conversely, the marcato symbol ^ over the number, or a staccato dot underneath, indicates that the chord should be immediately choked or stopped. The "push" symbol ("<" and ">" are both used) syncopates the indicated chord, moving its attack back one eighth note, to the preceding "and". A sequence of several chords in a single measure is notated by underlining the desired chord numbers. (Some charts use parentheses or a box for this.) If two numbers are underlined it is assumed that the chord values are even. In 4/4 time that would mean the first chord would be played for two beats and the second chord would be played for two beats. 2- 5 1 means a minor 2 chord for two beats, then a 5 chord for two beats, then a 1 chord for four beats. If the measure is not evenly divided, beats can be indicated by dots or hash marks over the chord numbers. Three dots over a given chord would tell the musician to play that chord for three beats. Alternatively, rhythmic notation can be used.



Harmonica RHYTHM Notation

"Simplicity is the ultimate sophistication." - Leo da Vinci (an early blues artist)


Standard Music has 13 core rhythm symbols / modifiers within a 5 line staff structure.
BeatTab uses less than 5 core rhythm symbols / modifiers (with no staff).
Typical Harp tab has zero rhythm symbols — but also zero rhythm.

Easy - Compact - Precise

- BeatTab Notation SUMMARY -

A good notation should be easy, consistent, compact and precise, guiding one to improvise.

Hole: Hole numbers as 1,2,3 … 8,9,0 (using 0 for 10th hole) (A, B … for holes 11,12…).
Bar (measure): Use forward slash ( / ) - /4 5 6 5 /5 6 4 - / = 2 bars of 4 beats each.

Rest: An ( x ) or period ( . ) both show a rest or stop. (x for downbeats, period for upbeats).
Duration of note or rest: See section on rhythm per modifiers and context bracketing.
Direction of airflow: See Pitch Modifiers below (basically Draws plain & Blows underlined).
Percussive Effect (cough, slap, etc - basically a substitute hole): P with explanation.
Power Modifier / Dynamics: Use bolding per volume or emphasis. - ( /x 24,3'2 x / ).
Last Bar Repeat: /2 3 4 5 / / / = Repeat previous bar two more times (total of 3 times).
Specific Bar Repeat: /2 3 4 5 /5 5 4 2 /1st /2nd / = repeat 1st bar and then the 2nd bar.
Phrase Repeat: 5 /5 5 4 2 >>/ = play bracketed section 2 times (total of 4 bars).

Modifiers: Always placed AFTER the note in the order of:
Pitch (direction & bending), Chording (note combinations), then Rhythm (durations).
— Use of Chording and/or Rhythm modifiers is optional as per the detail desired.

Direction of airflow: Draw holes are plain ( 3 4 ), Blows underlined ( 3 4 ) (or 3+ 4+ for web).
Bend: ( ' '' ''') = single/double/triple bends ( 2' , 3'' , 3''' or 8' ) - degree of bend per context.
Dip Bend: ( ` ) - (4` = Start on bent 4' and quickly swoop UP to the unbent 4).
Dip Bend (down): ( v ) - (1v = Start on the unbent 1 draw and quickly bend DOWN to 1').
Slide / Gliss ( > ) From one note to another: ( 3>554 ) - Durations determined per context.

Overbend: ( * ) - (6* = 6 hole overblow) (7* = 7 hole overdraw). (Use 6+* if using + modifier)
Chromatic Sharp: ( # ) - (2# = chromatic slide/button pressed/engaged to sharp the 2 draw).
Chromatic Natural: ( $ ) If ALL notes played with slide in, $ means slide OUT (un-sharped).
Vibrato: ( ~ ) - See notes within the rhythm modifier section.

Chord: ( c ) Placed after the LOWER hole - (2c = 23 together - often with just a hint of the 3).
Chord Smile: ( s ) After CENTER hole (3s = 234 together) - ie: 3 draw and “smile”.

Chord Tongue Block: ( t ) After main HIGHER note - (4t = 14 together - tongue covers 23).
Wide Tongue Block: ( T ) After main HIGHER note - (8T = 48 together - tongue covers 567).

Repeated Chords: Repeat the modifier - (3ss = 3s3s) and similar for 3cc, 4ttt, 8TT.
Custom Chords: Enclose with round ( ) brackets - (2345) = holes 2345 all played together.

Trill / Shake: ( = ) Placed AFTER 1st lower note - (4= is 4545).

Reference Duration (BEAT): (1/4 note duration normally) - ie: The counted beats in the bar.
Default Duration: (1/8 note normally) - All unmodified notes are 1/2 the duration of the BEAT.
Time Signature (Meter): (4/4 time normally - 4 beats of 1/4 notes per measure - Time=4/4).
Tempo: (eg: 70 bpm) - The number of beats per minute (of the 1/4 BEAT note).


3. RHYTHM (Duration) MODIFIERS:

Dashes add default duration to itself - eg: 1/8 notes become 1/4 notes - for downbeats.
Spaces add default duration to itself - eg: 1/8 notes become 1/4 notes - for upbeats.
More Spaces & Dashes add MORE 1/8 durations - (/2 - - - / is 1/8 + 7 more 1/8's =8/8=1).

Square Brackets [ ]'s allow squeezing of notes to fit within one beat. (context-based).
Square Brackets: [ ] = BEAT (duration of 1/4 note) - eg: [23] or [234] or [2343].
[23] indicates the same duration as 23 alone so [ ]'s only being used to clarify the grouping.
[234] would indicate a triplet (with 3 notes played in the same duration context as 23 alone).
[2343] being faster 1/16 notes (4 notes played in the same duration context as 23 alone).

Explicit swing can be shown via [hole-space-hole] eg: [2 3] per context, OR by 2,3 whereas:
Commas add about 1/3 to the preceding note duration & remove 1/3 from the next note.
Swing: Commas show explicit swing rhythm (delayed notes) - (/2,23,34,45 / showing swing).
Downbeats are BEFORE commas (semi-colons) & swung upbeats are AFTER commas.

Or with the [ ]’s included for beat clarification, the above is: /[2,2][3,3][4,4]5 /.
(Basically the commas are placeholders for the unplayed middle triplet beat).
Semi-colons are similar to Commas but create a staccato effect eg: 2;3 (vs. 2,3) - See Detail.

Downbeat Rest duration: An ( x ) indicates a 1/8 rest (a stop if after note) for a downbeat.
Upbeat Rest duration: A period ( . ) indicates a 1/8 rest (a stop if after note) for an upbeat.
More x’s, Periods, Spaces & Dashes add MORE 1/8 durations to the rest similar to notes.

Slurs: Mostly per context - Slurs are often assumed unless "rested" with an ( x ) or ( . ).
Vibrato Tilde: ( ~ ) - Has a duration equal to a Dash - add with spaces for longer vibrato.


MISCELLANEOUS NOTES: (key, emphasis, volume, tempo, style, tone, etc) - as needed.

Key = Position x Harp: shows key and position in a compact manner - see below:

C=1xC harp - shows key of C in 1st position on a C harp (straight or Ionian mode).
G=2xC harp - shows key of G in 2nd position on a C harp (cross or Mixolydian mode).
Dm=3xC harp - shows key of D(m) in 3rd position on C harp (slant or Dorian mode).
F=12xC harp - shows key of F in 12th position on C harp (Lydian mode).

Lick: I, IV, V (tonic, subdominant, dominant) = indicates type of lick (G, C, D if in key of G).
Structure: I=, IV=, V= per 1st, 2nd, 3rd lines of each of the four bar sets for a 12 bar blues.
Timing within videos: show the second ( =0:23 sec or simply =0.23 )


The above is the overall summary. Go to Detail to better discover its consistency, precision and compactness.

Author of BeatTab Harmonica Notation:
Grant Dukeshire of Calgary, Alberta


More advanced text formatting.

Makes a preformatted block of text with no syntax highlighting. All the optional parameters of the panel macro are valid here too.

  • nopanel: Embraces a block of text within a fully customizable panel. The optional parameters you can define are the following ones:


  • title: Title of the panel
  • borderStyle: The style of the border this panel uses (solid, dashed and other valid CSS border styles)
  • borderColor: The color of the border this panel uses
  • borderWidth: The width of the border this panel uses
  • bgColor: The background color of this panel
  • titleBGColor: The background color of the title section of this panel

Makes a preformatted block of code with syntax highlighting. All the optional parameters of the panel macro are valid here too. The default language is Java, but you can specify others, including ActionScript, Ada, AppleScript, bash, C, C#, C++, CSS, Erlang, Go, Groovy, Haskell, HTML, JavaScript, JSON, Lua, Nyan, Objc, Perl, PHP, Python, R, Ruby, Scala, SQL, Swift, VisualBasic, XML and YAML.


Introduction to Econometrics with R

In the multiple regression model we extend the three least squares assumptions of the simple regression model (see Chapter 4) and add a fourth assumption. These assumptions are presented in Key Concept 6.4. We will not go into the details of assumptions 1-3 since their ideas generalize easily to the case of multiple regressors. We will focus on the fourth assumption, which rules out perfect correlation between regressors.

Key Concept 6.4

The Least Squares Assumptions in the Multiple Regression Model

The multiple regression model is given by

\[ Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + u_i, \quad i=1,\dots,n. \]

The OLS assumptions in the multiple regression model are an extension of the ones made for the simple regression model:

  1. Regressors \((X_{1i}, X_{2i}, \dots, X_{ki}, Y_i)\), \(i=1,\dots,n\), are drawn such that the i.i.d. assumption holds.
  2. \(u_i\) is an error term with conditional mean zero given the regressors, i.e., \[ E(u_i \vert X_{1i}, X_{2i}, \dots, X_{ki}) = 0. \]
  3. Large outliers are unlikely; formally, \(X_{1i}, \dots, X_{ki}\) and \(Y_i\) have finite fourth moments.
  4. No perfect multicollinearity.

Multicollinearity

Multicollinearity means that two or more regressors in a multiple regression model are strongly correlated. If the correlation between two or more regressors is perfect, that is, one regressor can be written as a linear combination of the other(s), we have perfect multicollinearity. While strong multicollinearity in general is unpleasant as it causes the variance of the OLS estimator to be large (we will discuss this in more detail later), the presence of perfect multicollinearity makes it impossible to solve for the OLS estimator, i.e., the model cannot be estimated in the first place.

The next section presents some examples of perfect multicollinearity and demonstrates how lm() deals with them.

Examples of Perfect Multicollinearity

How does R react if we try to estimate a model with perfectly correlated regressors?

lm() will produce a warning in the first line of the coefficient section of the output (1 not defined because of singularities) and ignore the regressor(s) which is (are) assumed to be a linear combination of the other(s). Consider the following example where we add another variable FracEL, the fraction of English learners, to CASchools, whose observations are scaled values of the observations for english, and use it as a regressor together with STR and english in a multiple regression model. In this example english and FracEL are perfectly collinear. The R code is as follows.

The row FracEL in the coefficients section of the output consists of NA entries since FracEL was excluded from the model.

If we were to compute OLS by hand, we would run into the same problem but no one would be helping us out! The computation simply fails. Why is this? Take the following example:

Assume you want to estimate a simple linear regression model with a constant and a single regressor \(X\). As mentioned above, for perfect multicollinearity to be present, \(X\) has to be a linear combination of the other regressors. Since the only other regressor is a constant (think of the right-hand side of the model equation as \(\beta_0 \times 1 + \beta_1 X_i + u_i\), so that \(\beta_1\) is always multiplied by \(1\) for every observation), \(X\) has to be constant as well. For \(\hat\beta_1\) we have

\[ \hat\beta_1 = \frac{\sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^n (X_i - \bar{X})^2}. \]

The variance of the regressor \(X\) is in the denominator. Since the variance of a constant is zero, we are not able to compute this fraction and \(\hat\beta_1\) is undefined.
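The failure can be made concrete with a small numeric check (a Python/numpy illustration rather than the book's R; the data are arbitrary):

```python
import numpy as np

# A constant regressor makes the denominator sum((X_i - mean(X))^2) zero,
# so the by-hand OLS slope is undefined.
X = np.full(20, 3.0)                       # constant "regressor"
rng = np.random.default_rng(42)
Y = 5 + rng.normal(size=20)

denominator = np.sum((X - X.mean()) ** 2)
print(denominator)                         # 0.0 -> the formula fails

# A least-squares solver sees the same problem as rank deficiency:
design = np.column_stack([np.ones(20), X])
print(np.linalg.matrix_rank(design))       # 1 instead of 2
```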

Note: In this special case the denominator in (6.7) equals zero, too. Can you show that?

Let us consider two further examples where our selection of regressors induces perfect multicollinearity. First, assume that we intend to analyze the effect of class size on test score by using a dummy variable that identifies classes which are not small (\(NS\)). We define that a school has the \(NS\) attribute when the school's average student-teacher ratio is at least \(12\):

\[ NS_i = \begin{cases} 0, & \text{if } STR_i < 12 \\ 1, & \text{otherwise.} \end{cases} \]

We add the corresponding column to CASchools and estimate a multiple regression model with covariates computer and english.

Again, the output of summary(mult.mod) tells us that inclusion of NS in the regression would render the estimation infeasible. What happened here? This is an example where we made a logical mistake when defining the regressor NS: taking a closer look at \(NS\), the redefined measure for class size, reveals that there is not a single school with \(STR < 12\); hence \(NS\) equals one for all observations. We can check this by printing the contents of CASchools$NS or by using the function table(), see ?table.

CASchools$NS is a vector of \(420\) ones and our data set includes \(420\) observations. This obviously violates assumption 4 of Key Concept 6.4: the observations for the intercept are always \(1\),

\[\begin{align*} intercept &= \lambda \cdot NS \\ \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} &= \lambda \cdot \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} \\ \Leftrightarrow \quad \lambda &= 1. \end{align*}\]

Since the regressors can be written as a linear combination of each other, we face perfect multicollinearity and R excludes NS from the model. Thus the take-away message is: think carefully about how the regressors in your models relate!

Another example of perfect multicollinearity is known as the dummy variable trap. This may occur when multiple dummy variables are used as regressors. A common case for this is when dummies are used to sort the data into mutually exclusive categories. For example, suppose we have spatial information that indicates whether a school is located in the North, West, South or East of the U.S. This allows us to create the dummy variables

Since the regions are mutually exclusive, for every school \(i=1,\dots,n\) we have \[ North_i + West_i + South_i + East_i = 1. \]

We run into problems when trying to estimate a model that includes a constant and all four direction dummies, e.g., \[ TestScore_i = \beta_0 + \beta_1 \times STR_i + \beta_2 \times english_i + \beta_3 \times North_i + \beta_4 \times West_i + \beta_5 \times South_i + \beta_6 \times East_i + u_i \tag{6.8} \] since then for all observations \(i=1,\dots,n\) the constant term is a linear combination of the dummies:

\[\begin{align*} intercept &= \lambda_1 \cdot (North + West + South + East) \\ \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} &= \lambda_1 \cdot \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} \\ \Leftrightarrow \quad \lambda_1 &= 1 \end{align*}\]

and we have perfect multicollinearity. Thus the “dummy variable trap” means not paying attention and falsely including exhaustive dummies and a constant in a regression model.
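A quick way to see the trap numerically (a numpy sketch with hypothetical one-hot region dummies, not the book's R code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
region = rng.integers(0, 4, size=n)      # hypothetical codes for North/West/South/East
dummies = np.eye(4)[region]              # one-hot columns; each row sums to 1
intercept = np.ones((n, 1))

full = np.hstack([intercept, dummies])   # constant + all four dummies: 5 columns
print(np.linalg.matrix_rank(full))       # rank 4 here -> perfectly multicollinear

dropped = np.hstack([intercept, dummies[:, :3]])   # omit one dummy
print(np.linalg.matrix_rank(dropped))    # full column rank -> estimable
```

Because the four dummy columns sum to the intercept column, the full design matrix can never reach rank 5; dropping any one dummy restores full column rank.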

How does lm() handle a regression like (6.8)? Let us first generate some artificial categorical data and append a new column named directions to CASchools and see how lm() behaves when asked to estimate the model.

Notice that R solves the problem on its own by generating and including the dummies directionNorth, directionSouth and directionWest but omitting directionEast. Of course, the omission of every other dummy instead would achieve the same. Another solution would be to exclude the constant and to include all dummies instead.

Does this mean that the information on schools located in the East is lost? Fortunately, this is not the case: exclusion of directionEast just alters the interpretation of coefficient estimates on the remaining dummies from absolute to relative. For example, the coefficient estimate on directionNorth states that, on average, test scores in the North are about \(1.61\) points higher than in the East.

A last example considers the case where a perfect linear relationship arises from redundant regressors. Suppose we have a regressor \(PctES\), the percentage of English speakers in the school, where

\[ PctES = 100 - PctEL \]

and both \(PctES\) and \(PctEL\) are included in a regression model. One regressor is redundant since the other one conveys the same information. Since this obviously is a case where the regressors can be written as a linear combination of each other, we end up with perfect multicollinearity again.

Once more, lm() refuses to estimate the full model using OLS and excludes PctES.

See Chapter 18.1 of the book for an explanation of perfect multicollinearity and its consequences to the OLS estimator in general multiple regression models using matrix notation.

Imperfect Multicollinearity

As opposed to perfect multicollinearity, imperfect multicollinearity is — to a certain extent — less of a problem. In fact, imperfect multicollinearity is the reason why we are interested in estimating multiple regression models in the first place: the OLS estimator allows us to isolate influences of correlated regressors on the dependent variable. If it was not for these dependencies, there would not be a reason to resort to a multiple regression approach and we could simply work with a single-regressor model. However, this is rarely the case in applications. We already know that ignoring dependencies among regressors which influence the outcome variable has an adverse effect on estimation results.

So when and why is imperfect multicollinearity a problem? Suppose you have the regression model

\[ Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i \tag{6.9} \]

and you are interested in estimating \(\beta_1\), the effect on \(Y_i\) of a one-unit change in \(X_{1i}\), while holding \(X_{2i}\) constant. You do not know that the true model indeed includes \(X_2\). You follow some reasoning and add \(X_2\) as a covariate to the model in order to address a potential omitted variable bias. You are confident that \(E(u_i \vert X_{1i}, X_{2i}) = 0\) and that there is no reason to suspect a violation of the assumptions 2 and 3 made in Key Concept 6.4. If \(X_1\) and \(X_2\) are highly correlated, OLS struggles to precisely estimate \(\beta_1\). That means that although \(\hat\beta_1\) is a consistent and unbiased estimator for \(\beta_1\), it has a large variance due to \(X_2\) being included in the model. If the errors are homoskedastic, this issue can be better understood from the formula for the variance of \(\hat\beta_1\) in the model (6.9) (see Appendix 6.2 of the book):

\[ \sigma^2_{\hat\beta_1} = \frac{1}{n} \left( \frac{1}{1-\rho^2_{X_1,X_2}} \right) \frac{\sigma^2_u}{\sigma^2_{X_1}}. \]

First, if \(\rho_{X_1,X_2}=0\), i.e., if there is no correlation between both regressors, including \(X_2\) in the model has no influence on the variance of \(\hat\beta_1\). Secondly, if \(X_1\) and \(X_2\) are correlated, \(\sigma^2_{\hat\beta_1}\) is inversely proportional to \(1-\rho^2_{X_1,X_2}\), so the stronger the correlation between \(X_1\) and \(X_2\), the smaller is \(1-\rho^2_{X_1,X_2}\) and thus the bigger is the variance of \(\hat\beta_1\). Thirdly, increasing the sample size helps to reduce the variance of \(\hat\beta_1\). Of course, this is not limited to the case with two regressors: in multiple regressions, imperfect multicollinearity inflates the variance of one or more coefficient estimators. It is an empirical question which coefficient estimates are severely affected by this and which are not. When the sample size is small, one often faces the decision whether to accept the consequence of adding a large number of covariates (higher variance) or to use a model with only few regressors (possible omitted variable bias). This is called the bias-variance trade-off.

In sum, undesirable consequences of imperfect multicollinearity are generally not the result of a logical error made by the researcher (as is often the case for perfect multicollinearity) but are rather a problem that is linked to the data used, the model to be estimated and the research question at hand.

Simulation Study: Imperfect Multicollinearity

Let us conduct a simulation study to illustrate the issues sketched above.

  1. We use (6.9) as the data generating process with \(\beta_0 = 5\), \(\beta_1 = 2.5\) and \(\beta_2 = 3\), and \(u_i\) is an error term distributed as \(\mathcal{N}(0,5)\). In a first step, we sample the regressor data from a bivariate normal distribution: \[ X_i = (X_{1i}, X_{2i}) \overset{i.i.d.}{\sim} \mathcal{N}\left[ \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 10 & 2.5 \\ 2.5 & 10 \end{pmatrix} \right] \] It is straightforward to see that the correlation between \(X_1\) and \(X_2\) in the population is rather low: \[ \rho_{X_1,X_2} = \frac{Cov(X_1,X_2)}{\sqrt{Var(X_1)Var(X_2)}} = \frac{2.5}{10} = 0.25 \]

  2. Next, we estimate the model (6.9) and save the estimates for \(\beta_1\) and \(\beta_2\). This is repeated \(10000\) times with a for loop, so we end up with a large number of estimates that allow us to describe the distributions of \(\hat\beta_1\) and \(\hat\beta_2\).

  3. We repeat steps 1 and 2 but increase the covariance between \(X_1\) and \(X_2\) from \(2.5\) to \(8.5\) such that the correlation between the regressors is high: \[ \rho_{X_1,X_2} = \frac{Cov(X_1,X_2)}{\sqrt{Var(X_1)Var(X_2)}} = \frac{8.5}{10} = 0.85 \]

  4. In order to assess the effect of the increased collinearity between \(X_1\) and \(X_2\) on the precision of the estimators, we estimate the variances of \(\hat\beta_1\) and \(\hat\beta_2\) and compare.
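The four steps above can be sketched as follows. This is a Python/NumPy rendering; the per-replication sample size \(n = 50\) and the seeds are illustrative choices:

```python
import numpy as np

def simulate(cov, reps=10000, n=50, seed=1):
    """Draw `reps` samples from model (6.9) with Cov(X_1, X_2) = cov
    and return the OLS estimates of beta_1 and beta_2 per replication."""
    rng = np.random.default_rng(seed)
    mean = [0.0, 0.0]
    Sigma = [[10.0, cov], [cov, 10.0]]             # Var(X_1) = Var(X_2) = 10
    estimates = np.empty((reps, 2))
    for r in range(reps):
        X = rng.multivariate_normal(mean, Sigma, size=n)
        u = rng.normal(0.0, np.sqrt(5.0), size=n)  # u_i ~ N(0, 5)
        y = 5.0 + 2.5 * X[:, 0] + 3.0 * X[:, 1] + u
        Z = np.column_stack([np.ones(n), X])       # add an intercept column
        beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
        estimates[r] = beta_hat[1:]                # keep beta_1-hat, beta_2-hat
    return estimates

low = simulate(cov=2.5)    # rho = 0.25
high = simulate(cov=8.5)   # rho = 0.85
# the high-collinearity variances are roughly 3-4 times larger
print(low.var(axis=0), high.var(axis=0))
```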

We are interested in the variances, which are the diagonal elements. We see that due to the high collinearity, the variances of \(\hat\beta_1\) and \(\hat\beta_2\) have more than tripled, meaning it is more difficult to precisely estimate the true coefficients.


ElementNSImpl

ElementNSImpl inherits from ElementImpl and adds namespace support.

The qualified name is stored as the node name, and the localName is stored separately and used in all queries; the prefix, on the other hand, is recomputed when necessary.

Field Summary
protected java.lang.String localName
DOM2: localName.
protected java.lang.String namespaceURI
DOM2: Namespace URI.
Fields inherited from class org.apache.xerces.dom.ElementImpl
attributes, name
Fields inherited from class org.apache.xerces.dom.ParentNode
fCachedChild, fCachedChildIndex, fCachedLength, firstChild, ownerDocument
Fields inherited from class org.apache.xerces.dom.ChildNode
nextSibling, previousSibling
Fields inherited from class org.apache.xerces.dom.NodeImpl
ELEMENT_DEFINITION_NODE, FIRSTCHILD, flags, HASSTRING, IGNORABLEWS, OWNED, ownerNode, READONLY, SPECIFIED, SYNCCHILDREN, SYNCDATA, UNNORMALIZED
Fields inherited from interface org.w3c.dom.Node
ATTRIBUTE_NODE, CDATA_SECTION_NODE, COMMENT_NODE, DOCUMENT_FRAGMENT_NODE, DOCUMENT_NODE, DOCUMENT_TYPE_NODE, ELEMENT_NODE, ENTITY_NODE, ENTITY_REFERENCE_NODE, NOTATION_NODE, PROCESSING_INSTRUCTION_NODE, TEXT_NODE
Constructor Summary
protected ElementNSImpl (CoreDocumentImpl ownerDocument, java.lang.String value)
protected ElementNSImpl (CoreDocumentImpl ownerDocument, java.lang.String namespaceURI, java.lang.String qualifiedName)
DOM2: Constructor for Namespace implementation.
Method Summary
java.lang.String getLocalName ()
Introduced in DOM Level 2.
java.lang.String getNamespaceURI ()
Introduced in DOM Level 2.
java.lang.String getPrefix ()
Introduced in DOM Level 2.
void setPrefix (java.lang.String prefix)
Introduced in DOM Level 2.
Methods inherited from class org.apache.xerces.dom.ElementImpl
cloneNode, getAttribute, getAttributeNode, getAttributeNodeNS, getAttributeNS, getAttributes, getDefaultAttributes, getElementsByTagName, getElementsByTagNameNS, getNodeName, getNodeType, getTagName, hasAttribute, hasAttributeNS, hasAttributes, normalize, reconcileDefaultAttributes, removeAttribute, removeAttributeNode, removeAttributeNS, setAttribute, setAttributeNode, setAttributeNodeNS, setAttributeNS, setReadOnly, setupDefaultAttributes, synchronizeData
Methods inherited from class org.apache.xerces.dom.ParentNode
getChildNodes, getChildNodesUnoptimized, getFirstChild, getLastChild, getLength, getOwnerDocument, hasChildNodes, insertBefore, item, removeChild, replaceChild, synchronizeChildren
Methods inherited from class org.apache.xerces.dom.ChildNode
getNextSibling, getParentNode, getPreviousSibling
Methods inherited from class org.apache.xerces.dom.NodeImpl
addEventListener, appendChild, changed, changes, dispatchEvent, getNodeValue, getReadOnly, getUserData, isSupported, removeEventListener, setNodeValue, setUserData, toString
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.w3c.dom.Node
appendChild, getChildNodes, getFirstChild, getLastChild, getNextSibling, getNodeValue, getOwnerDocument, getParentNode, getPreviousSibling, hasChildNodes, insertBefore, isSupported, removeChild, replaceChild, setNodeValue

namespaceURI

localName

ElementNSImpl

ElementNSImpl

getNamespaceURI

The namespace URI of this node, or null if it is unspecified.

This is not a computed value that is the result of a namespace lookup based on an examination of the namespace declarations in scope. It is merely the namespace URI given at creation time.

For nodes created with a DOM Level 1 method, such as createElement from the Document interface, this is null.
Overrides: getNamespaceURI in class NodeImpl
Since: WD-DOM-Level-2-19990923

getPrefix

The namespace prefix of this node, or null if it is unspecified.

For nodes created with a DOM Level 1 method, such as createElement from the Document interface, this is null.

Overrides: getPrefix in class NodeImpl
Since: WD-DOM-Level-2-19990923

setPrefix

Note that setting this attribute changes the nodeName attribute, which holds the qualified name, as well as the tagName and name attributes of the Element and Attr interfaces, when applicable.

Overrides: setPrefix in class NodeImpl
Throws: INVALID_CHARACTER_ERR - Raised if the specified prefix contains an invalid character.
Since: WD-DOM-Level-2-19990923
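As a brief usage sketch: elements created through the DOM Level 2 createElementNS factory method expose the accessors documented above (in Xerces-based DOM implementations the returned element is backed by ElementNSImpl; the class name, namespace URI, and element name below are illustrative):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class NamespaceDemo {
    // Create a namespaced element via the standard DOM Level 2 factory method.
    static Element makeElement() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        return doc.createElementNS("http://example.com/ns", "ex:item");
    }

    public static void main(String[] args) throws Exception {
        Element el = makeElement();
        System.out.println(el.getNamespaceURI()); // namespace URI given at creation
        System.out.println(el.getLocalName());    // "item"
        System.out.println(el.getPrefix());       // "ex"
        // Setting the prefix also changes the qualified node name.
        el.setPrefix("demo");
        System.out.println(el.getNodeName());     // "demo:item"
    }
}
```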


Sigma Notation (Summation Notation)

The Sigma symbol, Σ, is a capital letter in the Greek alphabet. It corresponds to “S” in our alphabet, and is used in mathematics to describe “summation”, the addition or sum of a bunch of terms (think of the starting sound of the word “sum”: Sssigma = Sssum).

The Sigma symbol can be used all by itself to represent a generic sum… the general idea of a sum, of an unspecified number of unspecified terms.

But this is not something that can be evaluated to produce a specific answer, as we have not been told how many terms to include in the sum, nor have we been told how to determine the value of each term.

A more typical use of Sigma notation will include an integer below the Sigma (the “starting term number”), and an integer above the Sigma (the “ending term number”). In the example below, the exact starting and ending numbers don’t matter much since we are being asked to add the same value, two, repeatedly. All that matters in this case is the difference between the starting and ending term numbers… that will determine how many twos we are being asked to add, one two for each term number.
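One such sum (the starting and ending term numbers here are chosen only for illustration) is:

\[ \sum_{n=3}^{7} 2 = 2 + 2 + 2 + 2 + 2 = 10 \]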

Sigma notation (also called summation notation) is not usually worth the extra ink for simple sums such as the one above… multiplication could describe them more simply.

Sigma notation is most useful when the “term number” can be used in some way to calculate each term. To facilitate this, a variable is usually listed below the Sigma with an equal sign between it and the starting term number. This variable is called the “index variable”. If the index variable appears in the expression being summed, then the current term number should be substituted for the index variable:
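For example, substituting the term numbers 1 through 4 for the index variable \(i\):

\[ \sum_{i=1}^{4} i^2 = 1^2 + 2^2 + 3^2 + 4^2 = 30 \]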

Note that it is possible to have an index variable below the Sigma, but never use it. In such cases, just as in the example that resulted in a bunch of twos above, the term being added never changes:
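For instance, here the index variable \(i\) is declared but never used, so every term is the same constant:

\[ \sum_{i=1}^{3} 5 = 5 + 5 + 5 = 15 \]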

The “starting term number” need not be 1. It can be any value, including 0. For example:
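Starting the index at zero, for instance:

\[ \sum_{i=0}^{3} 2^i = 1 + 2 + 4 + 8 = 15 \]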

That covers what you need to know to begin working with Sigma notation. However, since Sigma notation will usually have more complex expressions after the Sigma symbol, here are some further examples to give you a sense of what is possible:
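Two illustrative possibilities (the particular expressions are our choices):

\[ \sum_{i=1}^{4} (3i - 2) = 1 + 4 + 7 + 10 = 22 \]

\[ \sum_{k=0}^{2} \frac{1}{2^k} = 1 + \frac{1}{2} + \frac{1}{4} = \frac{7}{4} \]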

Note that the last example above illustrates that, using the commutative property of addition, a sum of multiple terms can be broken up into multiple sums:
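For example, a sum whose terms each contain two parts can be split into two separate sums:

\[ \sum_{i=1}^{3} (2i + 1) = \sum_{i=1}^{3} 2i + \sum_{i=1}^{3} 1 = 12 + 3 = 15 \]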

And lastly, this notation can be nested:
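For instance (with illustrative bounds):

\[ \sum_{i=1}^{2} \sum_{j=1}^{3} (i \cdot j) = \sum_{i=1}^{2} \left( i \cdot 1 + i \cdot 2 + i \cdot 3 \right) = \sum_{i=1}^{2} 6i = 6 + 12 = 18 \]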

The rightmost sigma (similar to the innermost function when working with composed functions) above should be evaluated first. Once that has been evaluated, you can evaluate the next sigma to the left. Parentheses can also be used to make the order of evaluation clear.

Summary

Sigma (summation) notation is used in mathematics to indicate repeated addition. Sigma notation provides a compact way to represent many sums, and is used extensively when working with Arithmetic or Geometric Series.

To make use of it, you will need a “closed form” expression (one that allows you to describe each term’s value using the term number) that describes all terms in the sum (just as you often do when working with sequences and series). Sigma notation saves much paper and ink, as do other math notations, and allows fairly complex ideas to be described in relatively compact notation.


Meta-analyses evaluating surrogate endpoints for overall survival in cancer randomized trials: A critical review

In cancer randomized controlled trials (RCTs), alternative endpoints are increasingly being used in place of overall survival (OS) to reduce the sample size, duration and cost of trials. It is necessary to ensure that these endpoints are valid surrogates for OS. Our aim was to identify meta-analyses that evaluated surrogate endpoints for OS and assess the strength of evidence for each meta-analysis (MA).

Materials and methods

We performed a systematic review to identify MA of cancer RCTs assessing surrogate endpoints for OS. We evaluated the strength of the association between the endpoints based on (i) the German Institute of Quality and Efficiency in Health Care guidelines and (ii) the Biomarker-Surrogate Evaluation Schema.

Results

Fifty-three publications reported on 164 MA, with heterogeneous statistical methods. Disease-free survival (DFS) and progression-free survival (PFS) showed good surrogacy properties for OS in colorectal, lung and head and neck cancers. DFS was highly correlated with OS in gastric cancer.

Conclusion(s)

The statistical methodology used to evaluate surrogate endpoints requires consistency in order to facilitate the accurate interpretation of the results. Despite the limited number of clinical settings with validated surrogate endpoints for OS, there is evidence of good surrogacy for DFS and PFS in tumor types that account for a large proportion of cancer cases.