
3.1: Prelude to Differential Equations - Mathematics


Many real-world phenomena can be modeled mathematically by using differential equations. Population growth, radioactive decay, predator-prey models, and spring-mass systems are four examples of such phenomena. In this chapter we study some of these applications. Suppose we wish to study a population of deer over time and determine the total number of animals in a given area. We can first observe the population over a period of time, estimate the total number of deer, and then use various assumptions to derive a mathematical model for different scenarios. Some factors that are often considered are environmental impact, threshold population values, and predators. In this chapter we see how differential equations can be used to predict populations over time.

Another goal of this chapter is to develop solution techniques for different types of differential equations. As the equations become more complicated, the solution techniques also become more complicated, and in fact an entire course could be dedicated to the study of these equations. In this chapter we study several types of differential equations and their corresponding methods of solution.


Review: Matrices and Vectors

This section is intended to be a catch-all for many of the basic concepts that are used occasionally in working with systems of differential equations. There will not be a lot of detail in this section, nor will we be working large numbers of examples. Also, in many cases we will not be looking at the general case since we won't need the general cases in our differential equations work.

Let's start with some of the basic notation for matrices. An \(n \times m\) matrix (this is often called the size or dimension of the matrix) is a matrix with \(n\) rows and \(m\) columns, and the entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column is denoted by \(a_{ij}\). A shorthand method of writing a general \(n \times m\) matrix is the following:

\[A_{n \times m} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}\]

The size or dimension of a matrix is subscripted as shown if required. If it’s not required or clear from the problem the subscripted size is often dropped from the matrix.

Special Matrices

There are a few "special" matrices out there that we may use on occasion. The first special matrix is the square matrix. A square matrix is any matrix whose size (or dimension) is \(n \times n\). In other words, it has the same number of rows as columns. In a square matrix the diagonal that starts in the upper left and ends in the lower right is often called the main diagonal.

The next two special matrices that we want to look at are the zero matrix and the identity matrix. The zero matrix, denoted \(0_{n \times m}\), is a matrix all of whose entries are zeroes. The identity matrix is a square \(n \times n\) matrix, denoted \(I_n\), whose main diagonal entries are all 1's and all the other entries are zero. Here are the general zero and identity matrices:

\[0_{n \times m} = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{pmatrix} \qquad I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}\]

In matrix arithmetic these two matrices will act like zero and one act in the real number system.

The last two special matrices that we'll look at here are the column matrix and the row matrix. These are matrices that consist of a single column or a single row. In general, they are

\[x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \qquad y = \begin{pmatrix} y_1 & y_2 & \cdots & y_m \end{pmatrix}\]

We will often refer to these as vectors.

Arithmetic

We next need to take a look at arithmetic involving matrices. We'll start with addition and subtraction of two matrices. So, suppose that we have two \(n \times m\) matrices, \(A\) and \(B\). The sum (or difference) of these two matrices is then

\[A \pm B = \bigl( a_{ij} \pm b_{ij} \bigr)\]

The sum or difference of two matrices of the same size is a new matrix of identical size whose entries are the sums or differences of the corresponding entries from the original two matrices. Note that we can't add or subtract matrices with different sizes.

Next, let's look at scalar multiplication. In scalar multiplication we are going to multiply a matrix \(A\) by a constant (sometimes called a scalar) \(\alpha\). In this case we get a new matrix whose entries have all been multiplied by the constant:

\[\alpha A = \bigl( \alpha a_{ij} \bigr)\]

There isn’t much to do here other than the work.

We first multiplied all the entries of \(B\) by 5 and then subtracted corresponding entries to get the entries in the new matrix.

The final matrix operation that we'll take a look at is matrix multiplication. Here we will start with two matrices, \(A_{n \times p}\) and \(B_{p \times m}\). Note that \(A\) must have the same number of columns as \(B\) has rows, namely \(p\). If this isn't true, then we can't perform the multiplication. If it is true, then we can perform the following multiplication:

\[C_{n \times m} = A_{n \times p} B_{p \times m}\]

The new matrix will have size \(n \times m\), and the entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column, \(c_{ij}\), is found by multiplying row \(i\) of matrix \(A\) by column \(j\) of matrix \(B\). This doesn't always make sense in words, so let's look at an example.

The new matrix will have size \(2 \times 4\). The entry in row 1 and column 1 of the new matrix will be found by multiplying row 1 of \(A\) by column 1 of \(B\). This means that we multiply corresponding entries from the row of \(A\) and the column of \(B\) and then add the results up. Here are a couple of the entries computed all the way out.

\[\begin{aligned} c_{11} &= (2)(1) + (-1)(-4) + (0)(0) = 6 \\ c_{12} &= (2)(-1) + (-1)(1) + (0)(0) = -3 \\ c_{24} &= (-3)(2) + (6)(0) + (1)(-2) = -8 \end{aligned}\]

Here’s the complete solution.

In this last example notice that we could not have done the product \(BA\) since the number of columns of \(B\) does not match the number of rows of \(A\). It is important to note that just because we can compute \(AB\) doesn't mean that we can compute \(BA\). Likewise, even if we can compute both \(AB\) and \(BA\) they may or may not be the same matrix.
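
The matrices from the lost worked examples can't all be recovered, but the operations themselves are easy to demonstrate numerically. A minimal NumPy sketch (the matrices below are stand-ins chosen for illustration, not the ones from the original examples):

```python
import numpy as np

A = np.array([[2, -1, 0],
              [-3, 6, 1]])        # a 2 x 3 matrix

B = np.array([[1, -1, 2, 0],
              [-4, 1, 0, -2],
              [0, 0, 1, 3]])      # a 3 x 4 matrix

print(A + A)    # addition: entrywise sums (sizes must match)
print(5 * A)    # scalar multiplication: every entry times 5
print(A @ B)    # matrix product: (2 x 3)(3 x 4) -> 2 x 4
# B @ A raises a ValueError: B has 4 columns but A has only 2 rows,
# illustrating that AB existing does not mean BA exists.
```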

Determinant

The next topic that we need to take a look at is the determinant of a matrix. The determinant is actually a function that takes a square matrix and converts it into a number. The actual formula for the function is somewhat complex and definitely beyond the scope of this review.

The main method for computing determinants of any square matrix is called the method of cofactors. Since we are going to be dealing almost exclusively with \(2 \times 2\) matrices and the occasional \(3 \times 3\) matrix, we won't go into the method here. We can give simple formulas for each of these cases. The standard notation for the determinant of the matrix \(A\) is

\[\det(A) = \left| A \right|\]

Here are the formulas for the determinants of \(2 \times 2\) and \(3 \times 3\) matrices.

\[\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc\]

\[\det \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = a\left( ei - fh \right) - b\left( di - fg \right) + c\left( dh - eg \right)\]

For the \(2 \times 2\) there isn't much to do other than to plug it into the formula.

\[\det(A) = \begin{vmatrix} -9 & -18 \\ 2 & 4 \end{vmatrix} = (-9)(4) - (-18)(2) = 0\]

For the \(3 \times 3\) we could plug it into the formula; however, unlike the \(2 \times 2\) case, this is not an easy formula to remember. There is a quicker way of getting the same result: first write down the matrix and tack a copy of the first two columns onto the end as follows.

Now, notice that there are three diagonals that run from left to right and three diagonals that run from right to left. We multiply the entries along each diagonal; if the diagonal runs from left to right we add the product, and if the diagonal runs from right to left we subtract it.

Here is the work for this matrix.

\[\begin{aligned} \det(B) &= \begin{vmatrix} 2 & 3 & 1 \\ -1 & -6 & 7 \\ 4 & 5 & -1 \end{vmatrix} \begin{matrix} 2 & 3 \\ -1 & -6 \\ 4 & 5 \end{matrix} \\ &= (2)(-6)(-1) + (3)(7)(4) + (1)(-1)(5) - (3)(-1)(-1) - (2)(7)(5) - (1)(-6)(4) \\ &= 42 \end{aligned}\]

You can either use the formula or the shortcut to get the determinant of a \(3 \times 3\).

If the determinant of a matrix is zero we call that matrix singular, and if the determinant of a matrix isn't zero we call the matrix nonsingular. The \(2 \times 2\) matrix in the above example was singular while the \(3 \times 3\) matrix was nonsingular.
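
Both example determinants are quick to confirm numerically; a sketch using numpy.linalg.det:

```python
import numpy as np

A = np.array([[-9, -18],
              [2, 4]])
B = np.array([[2, 3, 1],
              [-1, -6, 7],
              [4, 5, -1]])

print(np.linalg.det(A))   # 0.0 (up to rounding)  -> A is singular
print(np.linalg.det(B))   # 42.0 (up to rounding) -> B is nonsingular
```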

Matrix Inverse

Next, we need to take a look at the inverse of a matrix. Given a square matrix, \(A\), of size \(n \times n\), if we can find another matrix of the same size, \(B\), such that

\[AB = BA = I_n\]

then we call \(B\) the inverse of \(A\) and denote it by \(B = A^{-1}\).

Computing the inverse of a matrix, \(A\), is fairly simple. First, we form a new matrix,

\[\left( A \mid I_n \right)\]

and then use the row operations from the previous section to try to convert this matrix into the form

\[\left( I_n \mid B \right)\]

If we can, then \(B\) is the inverse of \(A\). If we can't, then there is no inverse of the matrix \(A\).

We first form the new matrix by tacking the \(3 \times 3\) identity matrix onto this matrix.

We will now use row operations to try and convert the first three columns to the \(3 \times 3\) identity. In other words, we want 1's on the diagonal that starts at the upper left corner and zeroes in all the other entries in the first three columns.

If you think about it, this process is very similar to the process we used in the last section to solve systems, it just goes a little farther. Here is the work for this problem.

So, we were able to convert the first three columns into the \(3 \times 3\) identity matrix, and therefore the inverse exists.

So, there was an example in which the inverse did exist. Let’s take a look at an example in which the inverse doesn’t exist.

In this case we will tack on the \(2 \times 2\) identity to get the new matrix and then try to convert the first two columns to the \(2 \times 2\) identity matrix.

And we don't need to go any farther. In order for the \(2 \times 2\) identity to be in the first two columns we must have a 1 in the second entry of the second column and a 0 in the second entry of the first column. However, there is no way to get a 1 in the second entry of the second column that will keep a 0 in the second entry of the first column. Therefore, we can't get the \(2 \times 2\) identity in the first two columns and hence the inverse of \(B\) doesn't exist.

We will leave off this discussion of inverses with the following fact: given a square matrix \(A\), the inverse \(A^{-1}\) will exist if \(\det(A) \ne 0\) (i.e. \(A\) is nonsingular) and will not exist if \(\det(A) = 0\) (i.e. \(A\) is singular).

I’ll leave it to you to verify this fact for the previous two examples.
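
The augment-and-row-reduce recipe described above translates almost line for line into code. A minimal Gauss-Jordan sketch (partial pivoting only; a production routine would just call a library function such as numpy.linalg.inv):

```python
import numpy as np

def inverse(A, tol=1e-12):
    """Invert a square matrix by row reducing [A | I] to [I | B]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # tack on the identity
    for i in range(n):
        # swap up a row with the largest available pivot in column i
        p = i + np.argmax(np.abs(M[i:, i]))
        if abs(M[p, i]) < tol:
            raise ValueError("matrix is singular; no inverse exists")
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]                  # put a 1 on the diagonal
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]   # zero out the rest of column i
    return M[:, n:]                      # the right half is now the inverse

A = np.array([[2., 1.], [1., 1.]])
print(inverse(A) @ A)                    # approximately the identity
```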

Systems of Equations Revisited

We need to do a quick revisit of systems of equations. Let's start with a general system of \(n\) equations in the \(n\) unknowns \(x_1, x_2, \ldots, x_n\):

\[\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}\]

Now, convert each side into a vector to get

\[\begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}\]

The left side of this equation can be thought of as a matrix multiplication. Simplifying the notation a little gives

\[A \vec x = \vec b\]

where \(\vec x\) is a vector whose components are the unknowns in the original system of equations. We call \(A \vec x = \vec b\) the matrix form of the system of equations; solving the matrix form is equivalent to solving the original system, and the solving process is identical. The augmented matrix for the system is

\[\left( A \mid \vec b \right)\]

Once we have the augmented matrix we proceed as we did with a system that hasn’t been written in matrix form.

We also have the following fact about solutions to \(A \vec x = \vec b\).

Given the system of equations \(A \vec x = \vec b\) we have one of the following three possibilities for solutions.

    There will be no solutions.

    There will be exactly one solution.

    There will be infinitely many solutions.

In fact, we can go a little farther now. Since we are assuming that we've got the same number of equations as unknowns, the matrix \(A\) in \(A \vec x = \vec b\) is a square matrix and so we can compute its determinant. This gives the following fact.

Given the system of equations \(A \vec x = \vec b\) we have the following.

    If \(A\) is nonsingular then there will be exactly one solution to the system.

    If \(A\) is singular then there will be either no solutions or infinitely many solutions.

The matrix form of a homogeneous system is

\[A \vec x = \vec 0\]

where \(\vec 0\) is the vector of all zeroes. In the homogeneous system we are guaranteed to have a solution, \(\vec x = \vec 0\). The fact above for homogeneous systems is then,

Given the homogeneous system \(A \vec x = \vec 0\) we have the following.

    If \(A\) is nonsingular then the only solution will be \(\vec x = \vec 0\).

    If \(A\) is singular then there will be infinitely many nonzero solutions in addition to \(\vec x = \vec 0\).
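
Both facts are easy to see numerically. A small sketch with NumPy (the matrices are made up for the demonstration):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])          # det(A) = 1, nonsingular
b = np.array([3., 2.])
print(np.linalg.solve(A, b))      # exactly one solution: [1. 1.]

S = np.array([[1., 2.],
              [2., 4.]])          # det(S) = 0, singular
# np.linalg.solve(S, b) raises LinAlgError.  The homogeneous system
# S x = 0 instead has infinitely many solutions: every multiple of
x = np.array([2., -1.])
print(S @ x)                      # [0. 0.]
```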

Linear Independence/Linear Dependence

This is not the first time that we've seen this topic. We also saw linear independence and linear dependence back when we were looking at second order differential equations. In that section we were dealing with functions, but the concept is essentially the same here. If we start with \(n\) vectors, \(\vec x_1, \vec x_2, \ldots, \vec x_n\), and we can find constants \(c_1, c_2, \ldots, c_n\), with at least two nonzero, such that

\[c_1 \vec x_1 + c_2 \vec x_2 + \cdots + c_n \vec x_n = \vec 0\]

then we call the vectors linearly dependent. If the only constants that work in this equation are \(c_1 = 0\), \(c_2 = 0\), …, \(c_n = 0\) then we call the vectors linearly independent.

If we further make the assumption that each of the \(n\) vectors has \(n\) components, i.e. each of the vectors looks like

\[\vec x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \\ \vdots \\ x_{ni} \end{pmatrix}\]

we can get a very simple test for linear independence and linear dependence. Note that this does not have to be the case, but in all of our work we will be working with \(n\) vectors each of which has \(n\) components.

Given the \(n\) vectors each with \(n\) components, \(\vec x_1, \vec x_2, \ldots, \vec x_n\), form the matrix

\[X = \left( \vec x_1 \;\; \vec x_2 \;\; \cdots \;\; \vec x_n \right)\]

So, the matrix \(X\) is a matrix whose \(i^{\text{th}}\) column is the \(i^{\text{th}}\) vector, \(\vec x_i\). Then,

    If \(X\) is nonsingular (i.e. \(\det(X)\) is not zero) then the \(n\) vectors are linearly independent, and

    if \(X\) is singular (i.e. \(\det(X) = 0\)) then the vectors are linearly dependent, and the constants may be found by solving

\[X \vec c = \vec 0\]

where \(\vec c\) is a vector containing the constants.

So, the first thing to do is to form \(X\) and compute its determinant.

This matrix is nonsingular and so the vectors are linearly independent.

As with the last example, first form \(X\) and compute its determinant.

So, these vectors are linearly dependent. We now need to find the relationship between the vectors, i.e. the constants that make \(c_1 \vec x_1 + c_2 \vec x_2 + c_3 \vec x_3 = \vec 0\) true.

So, we need to solve the system

Here is the augmented matrix and the solution work for this system.

Now, we would like actual values for the constants. So, if we use \(c_3 = 3\) we get the solution \(c_1 = -2\), \(c_2 = 1\), and \(c_3 = 3\). The relationship is then

\[-2 \vec x_1 + \vec x_2 + 3 \vec x_3 = \vec 0\]
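
The determinant test and the recovery of the constants both translate directly into code. A sketch with NumPy and SciPy (the vectors here are stand-ins, since the example's vectors did not survive extraction):

```python
import numpy as np
from scipy.linalg import null_space

# columns are the vectors x_1, x_2, x_3; here x_3 = 2*x_1 + x_2
X = np.array([[1., 2., 4.],
              [2., 1., 5.],
              [3., 3., 9.]])

print(np.linalg.det(X))   # ~0  -> the vectors are linearly dependent
c = null_space(X)[:, 0]   # a nonzero solution of X c = 0
print(c / c[-1])          # scale so c_3 = 1: roughly [-2., -1., 1.],
                          # i.e. -2*x_1 - x_2 + x_3 = 0
```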

Calculus with Matrices

There really isn’t a whole lot to this other than to just make sure that we can deal with calculus with matrices.

First, to this point we've only looked at matrices with numbers as entries, but the entries in a matrix can be functions as well. So, we can look at matrices of the following form:

\[A(t) = \begin{pmatrix} a_{11}(t) & \cdots & a_{1m}(t) \\ \vdots & & \vdots \\ a_{n1}(t) & \cdots & a_{nm}(t) \end{pmatrix}\]

Now we can talk about differentiating and integrating a matrix of this form. To differentiate or integrate a matrix of this form, all we do is differentiate or integrate the individual entries:

\[\frac{dA}{dt} = \left( \frac{da_{ij}}{dt} \right) \qquad \int A(t)\,dt = \left( \int a_{ij}(t)\,dt \right)\]

So, when we run across this kind of thing don’t get excited about it. Just differentiate or integrate as we normally would.
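
A quick SymPy check that differentiation and integration really do act entrywise (a sketch; any matrix of functions of t will do):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, sp.sin(t)],
               [t**2, sp.exp(2*t)]])

print(A.diff(t))           # entrywise derivative
print(sp.integrate(A, t))  # entrywise antiderivative (no constants added)
```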

In this section we saw a very condensed set of topics from linear algebra. When we get back to differential equations many of these topics will show up occasionally and you will at least need to know what the words mean.

The main topic from linear algebra that you must know however if you are going to be able to solve systems of differential equations is the topic of the next section.



Damodar Dharmananda Kosambi was born at Kosben in Portuguese Goa to the Buddhist scholar Dharmananda Damodar Kosambi. After a few years of schooling in India, in 1918, Damodar and his elder sister, Manik, travelled to Cambridge, Massachusetts with their father, who had taken up a teaching position at the Cambridge Latin School. [5] Their father was tasked by Professor Charles Rockwell Lanman of Harvard University with completing a critical edition of the Visuddhimagga, a book on Buddhist philosophy, which had originally been started by Henry Clarke Warren. There, the young Damodar spent a year in a grammar school and then was admitted to the Cambridge High and Latin School in 1920. He became a member of the Cambridge branch of the American Boy Scouts.

It was in Cambridge that he befriended another prodigy of the time, Norbert Wiener, whose father Leo Wiener was the elder Kosambi's colleague at Harvard University. Kosambi excelled in his final school examination and was one of the few candidates who was exempt on the basis of merit from necessarily passing an entrance examination essential at the time to gain admission to Harvard University. He enrolled in Harvard in 1924, but eventually postponed his studies, and returned to India. He stayed with his father who was now working in the Gujarat University, and was in the close circles of Mahatma Gandhi.

In January 1926, Kosambi returned to the US with his father, who once again studied at Harvard University for a year and a half. Kosambi studied mathematics under George David Birkhoff, who wanted him to concentrate on mathematics, but the ambitious Kosambi instead took many diverse courses, excelling in each of them. In 1929, Harvard awarded him the Bachelor of Arts degree summa cum laude. He was also granted membership of the esteemed Phi Beta Kappa Society, the oldest undergraduate honours organisation in the United States. He returned to India soon after.

He obtained the post of professor at the Banaras Hindu University (BHU), teaching German alongside mathematics. He struggled to pursue his research on his own, and published his first research paper, "Precessions of an Elliptic Orbit" in the Indian Journal of Physics in 1930.

In 1931, Kosambi married Nalini from the wealthy Madgaonkar family. It was in this year that he was hired by mathematician André Weil, then Professor of Mathematics at Aligarh Muslim University, to a lectureship in mathematics at Aligarh. [6] His other colleagues at Aligarh included Vijayraghavan. During his two-year stay in Aligarh, he produced eight research papers in the general area of differential geometry and path spaces. His fluency in several European languages allowed him to publish some of his early papers in French, Italian and German journals in their respective languages.


Mathematics

In 1932, he joined the Deccan Education Society's Fergusson College in Pune, where he taught mathematics for 14 years. [7] His eldest daughter, Maya, was born in 1935; the youngest, Meera, in 1939.

In 1944 he published a small article of 4 pages titled "The Estimation of Map Distance from Recombination Values" in Annals of Eugenics, in which he introduced what later came to be known as the Kosambi map function. According to his equation, genetic map distance (w) is related to recombination fraction (θ) in the following way:

w = (1/4) ln[(1 + 2θ)/(1 − 2θ)]

Kosambi's mapping function adjusts the map distance to account for interference, which changes the proportion of double crossovers. (To know more about this, see https://www.academia.edu/665254/Kosambi_and_the_genetic_mapping_function.)

One of the most important contributions of Kosambi to statistics is the widely known technique called proper orthogonal decomposition (POD). Although it was originally developed by Kosambi in 1943, it is now referred to as the Karhunen–Loève expansion. In the 1943 paper entitled 'Statistics in Function Space' presented in the Journal of the Indian Mathematical Society, Kosambi presented the Proper Orthogonal Decomposition some years before Karhunen (1945) and Loeve (1948). This tool has found application to such diverse fields as image processing, signal processing, data compression, oceanography, chemical engineering and fluid mechanics. Unfortunately this most important contribution of his is barely acknowledged in most papers that utilise the POD method. In recent years though, some authors have indeed referred to it as the Kosambi-Karhunen-Loeve decomposition. [8]

Historical studies

Until 1939, Kosambi was almost exclusively focused on mathematical research, but later he gradually started foraying into the social sciences. [7] It was his studies in numismatics that initiated him into the field of historical research. He did extensive research in the difficult science of numismatics, evaluating data with modern statistical methods. [9] For example, he statistically analyzed the weights of thousands of punch-marked coins from different Indian museums to establish their chronological sequence and put forward his theories about the economic conditions under which these coins could have been minted. [7]

Sanskrit

He made a thorough study of Sanskrit and ancient literature, and he started his classic work on the ancient poet Bhartṛhari. He published exemplary critical editions of Bhartrihari's Śatakatraya and Subhashitas during 1945–1948.

Activism

It was during this period that he started his political activism, coming close to the radical streams in the ongoing independence movement, especially the Communist Party of India. He became an outspoken Marxist and wrote some political articles.

In the 1940s, Homi J. Bhabha invited Kosambi to join the Tata Institute of Fundamental Research (TIFR). Kosambi joined TIFR as Chair for Mathematics in 1946, and held the position for the next 16 years. He continued to live in his own house in Pune, and commuted to Mumbai every day by the Deccan Queen train. [10]

After independence, in 1948–49 he was sent to England and to the US as a UNESCO Fellow to study the theoretical and technical aspects of computing machines. In London, he started his long-lasting friendship with Indologist and historian A.L. Basham. In the spring semester of 1949, he was a visiting professor of geometry in the Mathematics Department at the University of Chicago, where his colleague from his Harvard days, Marshall Harvey Stone, was the chair. In April–May 1949, he spent nearly two months at the Institute for Advanced Study in Princeton, New Jersey, discussing with such illustrious physicists and mathematicians as J. Robert Oppenheimer, Hermann Weyl, John von Neumann, Marston Morse, Oswald Veblen and Carl Ludwig Siegel amongst others.

After his return to India, in the Cold War circumstances, he was increasingly drawn into the World Peace Movement and served as a Member of the World Peace Council. He became a tireless crusader for peace, campaigning against the nuclearisation of the world. Kosambi's solution to India's energy needs was in sharp conflict with the ambitions of the Indian ruling class. He proposed alternative energy sources, like solar power. His activism in the peace movement took him to Beijing, Helsinki and Moscow. However, during this period he relentlessly pursued his diverse research interests, too. Most importantly, he worked on his Marxist rewriting of ancient Indian history, which culminated in his book, An Introduction to the Study of Indian History (1956).

He visited China many times during 1952–62 and was able to watch the Chinese revolution very closely, making him critical of the way modernisation and development were envisaged and pursued by the Indian ruling classes. All these contributed to straining his relationship with the Indian government and Bhabha, eventually leading to Kosambi's exit from the Tata Institute of Fundamental Research in 1962.

His exit from the TIFR gave Kosambi the opportunity to concentrate on his research in ancient Indian history, culminating in his book The Culture and Civilisation of Ancient India, which was published in 1965 by Routledge & Kegan Paul. The book was translated into German, French and Japanese and was widely acclaimed. He also utilised his time in archaeological studies, and contributed to the fields of statistics and number theory. His article on numismatics was published in Scientific American in February 1966.

Due to the efforts of his friends and colleagues, in June 1964, Kosambi was appointed as a Scientist Emeritus of the Council of Scientific and Industrial Research (CSIR) affiliated with the Maharashtra Vidnyanvardhini in Pune. He pursued many historical, scientific and archaeological projects (even writing stories for children). But most works he produced in this period could not be published during his lifetime.

Kosambi died of myocardial infarction in the early hours of 29 June 1966, after being declared generally fit by his family doctor on the previous day. [5]

He was posthumously decorated with the Hari Om Ashram Award by the Government of India's University Grants Commission in 1980.

His friend A.L. Basham, a well-known indologist, wrote in his obituary:

At first it seemed that he had only three interests, which filled his life to the exclusion of all others — ancient India, in all its aspects, mathematics and the preservation of peace. For the last, as well as for his two intellectual interests, he worked hard and with devotion, according to his deep convictions. Yet as one grew to know him better one realized that the range of his heart and mind was very wide. In the later years of his life, when his attention turned increasingly to anthropology as a means of reconstructing the past, it became more than ever clear that he had a very deep feeling for the lives of the simple people of Maharashtra. [11]


Although Kosambi was not a practising historian, he wrote four books and sixty articles on history: these works had a significant impact on the field of Indian historiography. [12] He understood history in terms of the dynamics of socio-economic formations rather than just a chronological narration of "episodes" or the feats of a few great men – kings, warriors or saints. In the very first paragraph of his classic work, An Introduction to the Study of Indian History, he gives an insight into his methodology as a prelude to his life work on ancient Indian history:

"The light-hearted sneer “India has had some episodes, but no history“ is used to justify lack of study, grasp, intelligence on the part of foreign writers about India’s past. The considerations that follow will prove that it is precisely the episodes — lists of dynasties and kings, tales of war and battle spiced with anecdote, which fill school texts — that are missing from Indian records. Here, for the first time, we have to reconstruct a history without episodes, which means that it cannot be the same type of history as in the European tradition." [13]

According to A. L. Basham, "An Introduction to the Study of Indian History is in many respects an epoch-making work, containing brilliantly original ideas on almost every page; if it contains errors and misrepresentations, if now and then its author attempts to force his data into a rather doctrinaire pattern, this does not appreciably lessen the significance of this very exciting book, which has stimulated the thought of thousands of students throughout the world." [11]

Professor Sumit Sarkar says: "Indian Historiography, starting with D.D. Kosambi in the 1950s, is acknowledged the world over – wherever South Asian history is taught or studied – as quite on a par with or even superior to all that is produced abroad." [14]

In his obituary of Kosambi published in Nature, J. D. Bernal had summed up Kosambi's talent as follows: "Kosambi introduced a new method into historical scholarship, essentially by application of modern mathematics. By statistical study of the weights of the coins, Kosambi was able to establish the amount of time that had elapsed while they were in circulation and so set them in order to give some idea of their respective ages."

Kosambi is an inspiration to many across the world, especially to Sanskrit philologists [15] and Marxist scholars. He deeply influenced Indian historiography. [16] The Government of Goa has instituted the annual D.D. Kosambi Festival of Ideas since February 2008 to commemorate his birth centenary. [17]

Historian Irfan Habib said, "D. D. Kosambi and R.S. Sharma, together with Daniel Thorner, brought peasants into the study of Indian history for the first time." [18]

India Post issued a commemorative postage stamp on 31 July 2008 to honour Kosambi. [20] [21]

Works on history and society

  • 1956 An Introduction to the Study of Indian History (Popular Book Depot, Bombay)
  • 1957 Exasperating Essays: Exercise in the Dialectical Method (People's Book House, Poona)
  • 1962 Myth and Reality: Studies in the Formation of Indian Culture (Popular Prakashan, Bombay)
  • 1965 The Culture and Civilisation of Ancient India in Historical Outline (Routledge & Kegan Paul, London)
  • 1981 Indian Numismatics (Orient Blackswan, New Delhi)
  • 2002 D.D. Kosambi: Combined Methods in Indology and Other Writings – Compiled, edited and introduced by Brajadulal Chattopadhyaya (Oxford University Press, New Delhi). Pdf on archive.org
  • 2009 The Oxford India Kosambi – Compiled, edited and introduced by Brajadulal Chattopadhyaya (Oxford University Press, New Delhi)
  • 2014 Unsettling The Past, edited by Meera Kosambi (Permanent Black, Ranikhet)
  • 2016 Adventures into the Unknown: Essays, edited by Ram Ramaswamy (Three Essays Collective, New Delhi)

Edited works

  • 1945 The Satakatrayam of Bhartrhari with the Comm. of Ramarsi, edited in collaboration with Pt. K. V. Krishnamoorthi Sharma (Anandasrama Sanskrit Series, No.127, Poona)
  • 1946 The Southern Archetype of Epigrams Ascribed to Bhartrhari (Bharatiya Vidya Series 9, Bombay) (First critical edition of a Bhartrhari recension.)
  • 1948 The Epigrams Attributed to Bhartrhari (Singhi Jain Series 23, Bombay) (Comprehensive edition of the poet's work remarkable for rigorous standards of text criticism.)
  • 1952 The Cintamani-saranika of Dasabala Supplement to Journal of Oriental Research, xix, pt, II (Madras) (A Sanskrit astronomical work which shows that King Bhoja of Dhara died in 1055–56.)
  • 1957 The Subhasitaratnakosa of Vidyakara, edited in collaboration with V.V. Gokhale (Harvard Oriental Series 42)

In addition to the papers listed below, Kosambi wrote two books in mathematics, the manuscripts of which have not been traced. The first was a book on path geometry that was submitted to Marston Morse in the mid-1940s and the second was on prime numbers, submitted shortly before his death. Unfortunately, neither book was published. The list of articles below is complete but does not include his essays on science and scientists, some of which have appeared in the collection Science, Society, and Peace (People's Publishing House, 1995). Four articles (between 1962 and 1965) are written under the pseudonym S. Ducray.

  • 1930 Precessions of an elliptical orbit, Indian Journal of Physics, 5, 359–364
  • 1931 On a generalization of the second theorem of Bourbaki, Bulletin of the Academy of Sciences, U. P., 1, 145–147
  • 1932 Modern differential geometries, Indian Journal of Physics, 7, 159–164
  • 1932 On differential equations with the group property, Journal of the Indian Mathematical Society, 19, 215–219
  • 1932 Geometrie differentielle et calcul des variations, Rendiconti della Reale Accademia Nazionale dei Lincei, 16, 410–415 (in French)
  • 1932 On the existence of a metric and the inverse variational problem, Bulletin of the Academy of Sciences, U. P., 2, 17–28
  • 1932 Affin-geometrische Grundlagen der Einheitlichen Feld–theorie, Sitzungsberichten der Preussische Akademie der Wissenschaften, Physikalisch-mathematische klasse, 28, 342–345 (in German)
  • 1933 Parallelism and path-spaces, Mathematische Zeitschrift, 37, 608–618
  • 1933 The problem of differential invariants, Journal of the Indian Mathematical Society, 20, 185–188
  • 1933 The classification of integers, Journal of the University of Bombay, 2, 18–20
  • 1934 Collineations in path-space, Journal of the Indian Mathematical Society, 1, 68–72
  • 1934 Continuous groups and two theorems of Euler, The Mathematics Student, 2, 94–100
  • 1934 The maximum modulus theorem, Journal of the University of Bombay, 3, 11–12
  • 1935 Homogeneous metrics, Proceedings of the Indian Academy of Sciences, 1, 952–954
  • 1935 An affine calculus of variations, Proceedings of the Indian Academy of Sciences, 2, 333–335
  • 1935 Systems of differential equations of the second order, Quarterly Journal of Mathematics (Oxford), 6, 1–12
  • 1936 Differential geometry of the Laplace equation, Journal of the Indian Mathematical Society, 2, 141–143
  • 1936 Path-spaces of higher order, Quarterly Journal of Mathematics (Oxford), 7, 97–104
  • 1936 Path-geometry and cosmogony, Quarterly Journal of Mathematics (Oxford), 7, 290–293
  • 1938 Les métriques homogènes dans les espaces cosmogoniques, Comptes rendus de l'Académie des Sciences, 206, 1086–1088 (in French)
  • 1938 Les espaces des paths généralisés qu'on peut associer avec un espace de Finsler, Comptes rendus de l'Académie des Sciences, 206, 1538–1541 (in French)
  • 1939 The tensor analysis of partial differential equations, Journal of the Indian Mathematical Society, 3, 249–253 (1939) Japanese version of this article in Tensor, 2, 36–39
  • 1940 A statistical study of the weights of the old Indian punch-marked coins, Current Science, 9, 312–314
  • 1940 On the weights of old Indian punch-marked coins, Current Science, 9, 410–411
  • 1940 Path-equations admitting the Lorentz group, Journal of the London Mathematical Society, 15, 86–91
  • 1940 The concept of isotropy in generalized path-spaces, Journal of the Indian Mathematical Society, 4, 80–88
  • 1940 A note on frequency distribution in series, The Mathematics Student, 8, 151–155
  • 1941 A bivariate extension of Fisher's Z–test, Current Science, 10, 191–192
  • 1941 Correlation and time series, Current Science, 10, 372–374
  • 1941 Path-equations admitting the Lorentz group–II, Journal of the Indian Mathematical Society, 5, 62–72
  • 1941 On the origin and development of silver coinage in India, Current Science, 10, 395–400
  • 1942 On the zeros and closure of orthogonal functions, Journal of the Indian Mathematical Society, 6, 16–24
  • 1942 The effect of circulation upon the weight of metallic currency, Current Science, 11, 227–231
  • 1942 A test of significance for multiple observations, Current Science, 11, 271–274
  • 1942 On valid tests of linguistic hypotheses, New Indian Antiquary, 5, 21–24
  • 1943 Statistics in function space, Journal of the Indian Mathematical Society, 7, 76–88
  • 1944 The estimation of map distance from recombination values, Annals of Eugenics, 12, 172–175
  • 1944 Direct derivation of Balmer spectra, Current Science, 13, 71–72
  • 1944 The geometric method in mathematical statistics, American Mathematical Monthly, 51, 382–389
  • 1945 Parallelism in the tensor analysis of partial differential equations, Bulletin of the American Mathematical Society, 51, 293–296
  • 1946 The law of large numbers, The Mathematics Student, 14, 14–19
  • 1946 Sur la différentiation covariante, Comptes rendus de l'Académie des Sciences, 222, 211–213 (in French)
  • 1947 An extension of the least–squares method for statistical estimation, Annals of Eugenics, 18, 257–261
  • 1947 Possible Applications of the Functional Calculus, Proceedings of the 34th Indian Science Congress. Part II: Presidential Addresses, 1–13
  • 1947 Les invariants différentiels d'un tenseur covariant à deux indices, Comptes rendus de l'Académie des Sciences, 225, 790–792 (in French)
  • 1948 Systems of partial differential equations of the second order, Quarterly Journal of Mathematics (Oxford), 19, 204–219
  • 1949 Characteristic properties of series distributions, Proceedings of the National Institute of Science of India, 15, 109–113
  • 1949 Lie rings in path-space, Proceedings of the National Academy of Sciences (USA), 35, 389–394
  • 1949 The differential invariants of a two-index tensor, Bulletin of the American Mathematical Society, 55, 90–94
  • 1951 Series expansions of continuous groups, Quarterly Journal of Mathematics (Oxford, Series 2), 2, 244–257
  • 1951 Seasonal variations in the Indian birth–rate, Annals of Eugenics, 16, 165–192 (with S. Raghavachari)
  • 1952 Path-spaces admitting collineations, Quarterly Journal of Mathematics (Oxford, Series 2), 3, 1–11
  • 1952 Path-geometry and continuous groups, Quarterly Journal of Mathematics (Oxford, Series 2), 3, 307–320
  • 1954 Seasonal variations in the Indian death–rate, Annals of Human Genetics, 19, 100–119 (with S. Raghavachari)
  • 1954 The metric in path-space, Tensor (New Series), 3, 67–74
  • 1957 The method of least–squares, Advancement in Mathematics, 3, 485–491 (in Chinese)
  • 1958 Classical Tauberian theorems, Journal of the Indian Society of Agricultural Statistics, 10, 141–149
  • 1958 The efficiency of randomization by card–shuffling, Journal of the Royal Statistics Society, 121, 223–233 (with U. V. R. Rao)
  • 1959 The method of least–squares, Journal of the Indian Society of Agricultural Statistics, 11, 49–57
  • 1959 An application of stochastic convergence, Journal of the Indian Society of Agricultural Statistics, 11, 58–72
  • 1962 A note on prime numbers, Journal of the University of Bombay, 31, 1–4 (as S. Ducray)
  • 1963 The sampling distribution of primes, Proceedings of the National Academy of Sciences (USA), 49, 20–23
  • 1963 Normal Sequences, Journal of the University of Bombay, 32, 49–53 (as S. Ducray)
  • 1964 Statistical methods in number theory, Journal of the Indian Society of Agricultural Statistics, 16, 126–135
  • 1964 Probability and prime numbers, Proceedings of the Indian Academy of Sciences, 60, 159–164 (as S. Ducray)
  • 1965 The sequence of primes, Proceedings of the Indian Academy of Sciences, 62, 145–149 (as S. Ducray)
  • 1966 Numismatics as a Science, Scientific American, February 1966, pages 102–111
  • 2016 Selected Works in Mathematics and Statistics, ed. Ramakrishna Ramaswamy, Springer. (Posthumous publication)
  1. Vinod, K.K. (June 2011). "Kosambi and the genetic mapping function". Resonance. 16 (6): 540–550. doi:10.1007/s12045-011-0060-x. S2CID 84289582.
  2. Raju, C.K. (2009). "Kosambi the Mathematician". Economic and Political Weekly. 44 (20): 33–45.
  3. Kosambi, D. D. (1943). "Statistics in Function Space". Journal of the Indian Mathematical Society. 7: 76–88. MR 0009816.
  4. Sreedharan, E. (2004). A Textbook of Historiography: 500 BC to AD 2000. Orient Blackswan. p. 469. ISBN 978-81-250-2657-0.
  5. V. V. Gokhale 1974, p. 1.
  6. Weil, André; Gage, Jennifer C. (trans.) (1992). The Apprenticeship of a Mathematician. Basel, Switzerland: Birkhäuser Verlag. ISBN 9783764326500. OCLC 24791768.
  7. V. V. Gokhale 1974, p. 2.
  8. Steward, Jeff (20 May 2009). The Solution of a Burgers' Equation Inverse Problem with Reduced-Order Modeling Proper Orthogonal Decomposition (Master's thesis). Tallahassee, Florida: Florida State University.
  9. Sreedharan, E. (2007). A Manual of Historical Research Methodology. Thiruvananthapuram, India: Centre for South Indian Studies. ISBN 9788190592802.
  10. V. V. Gokhale 1974, p. 3.
  11. Basham, A. L. et al. (1974). "'Baba': A Personal Tribute". In Sharma, Ram Sharan (ed.). Indian Society: Historical Probings, in Memory of D. D. Kosambi. New Delhi, India: People's Publishing House. pp. 16–19. OCLC 3206457.
  12. Sharma, R. S. (1974) [1958]. "Preface". Indian Society: Historical Probings in Memory of D. D. Kosambi. Indian Council of Historical Research / People's Publishing House. p. vii. ISBN 978-81-7007-176-1.
  13. Kosambi, Damodar Dharmanand (1975) [1956]. An Introduction to the Study of Indian History (Second ed.). Mumbai, India: Popular Prakashan. p. 1.
  14. "Not a question of bias". Frontline. 17 (5). 4–17 March 2000.
  15. Pollock, Sheldon (26 July 2008). "Towards a Political Philology" (PDF). Economic & Political Weekly.
  16. Sreedharan, E. (2004). A Textbook of Historiography: 500 BC to AD 2000. Orient Blackswan. ISBN 978-81-250-2657-0.
  17. "D.D. Kosambi festival from February 5". The Hindu. 20 January 2011. ISSN 0971-751X.
  18. Habib, Irfan (2007). Essays in Indian History (Seventh reprint). Tulika. p. 381 (at p. 109). ISBN 978-81-85229-00-3.
  19. Padgaonkar, Dileep (8 February 2013). "Kosambi's uplifting idea of India". Times of India Blog.
  20. Vaidya, Abhay (11 December 2008). "Finally, a stamp in DD Kosambi's honour". DNA.
  21. "Stamps 2008". Indian Postage Stamps. Ministry of Communication, Government of India.

Bibliography

  • V. V. Gokhale (1974) [1958]. "Damodar Dharmanand Kosambi". In R. S. Sharma (ed.). Indian Society: Historical Probings in memory of D. D. Kosambi. Indian Council of Historical Research / People's Publishing House. ISBN978-81-7007-176-1 .

A collection of D. D. Kosambi's essays entitled Science, Society and Peace was published in the 1980s by the Academy of Political & Social Studies, 216 Narayan Peth, Pune 411030, and republished by People's Publishing House, New Delhi, in 1995.


This post is about parameter estimation based on gradient descent, specifically for a differential equation: we are looking for a good set of parameters that best fits the experimental data. We will go through the method of choosing initial parameter guesses and running the optimization. This post is more fully explained in this note: Differential Equation Parameter Estimations. The data used here is a fraction of the sample data I was given. The codes and sample data can also be found here.

Consider this link instead: Parameter Estimation for Differential Equations using Scipy Least Square, since it utilizes a more robust optimization component from scipy.

Place fitting.py, fitting_post_processing.py and param_estimate.py (the codes are at the end of the post) in the project folder. Some required Python packages include matplotlib, scipy and kero; pip install them if needed.

  1. In the command line, cd into the directory where the project is.
  2. Preliminary. Prepare sample_data.csv containing data points (t, n). The first column is for t (horizontal axis), the second column is for n (vertical axis). Set the following toggles in fitting.py:

    plot_data = True
    do_initial_testing = False
    start_op = False

Set collection_of_p_init. You can place as many guesses as you want. In this example,

we try using 3 different initial parameters, where each is of the format [G, k1, k2, k3], corresponding to the 4 parameters that we want to fit. Then run the command: python fitting.py. The results are shown in figure 2. It can be seen that the first and third guesses are closer to the experimental data; we might want to use parameters near these values for random initiation in the optimization stage (but for demonstration's sake, we will not do that here).

Finally, we want to see what the best parameter p is. When viewing the optimization result, the parameters will be printed. We are more or less done!

What else to do?

In this post I have only run a few iterations over a small number of random parameter initializations. To obtain the best results, we should run more iterations and try more random initial parameters.
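
For readers without the original fitting.py, here is a minimal self-contained sketch of the idea. The model's right-hand side was elided from this post, so the cubic form below (matching the [G, k1, k2, k3] parameter layout and the cubicODE naming in the companion post) is an assumption, as are all the names in the snippet:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, n, G, k1, k2, k3):
    # assumed model: dn/dt = G + k1*n + k2*n**2 + k3*n**3
    return G + k1 * n + k2 * n**2 + k3 * n**3

def mse(p, t_data, n_data):
    # integrate the ODE from the first data point, then score the fit
    sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [n_data[0]],
                    t_eval=t_data, args=tuple(p))
    return np.mean((sol.y[0] - n_data) ** 2)

def fit(p_init, t_data, n_data, learning_rate=1e-6, iters=200, eps=1e-8):
    """Plain gradient descent on the MSE, with finite-difference gradients."""
    p = np.array(p_init, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(p)
        for k in range(len(p)):
            dp = np.zeros_like(p)
            dp[k] = eps
            grad[k] = (mse(p + dp, t_data, n_data)
                       - mse(p - dp, t_data, n_data)) / (2 * eps)
        p -= learning_rate * grad
    return p
```

As in the post, the learning rate and the number of iterations are the knobs to tune, and several random initializations should be tried.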

Figure 1. A plot of n versus t from the sample data is shown in blue.

Figure 2. The curves show 3 different plots from 3 different guesses of initial parameters. The red curve is the experimental data. The console shows the recommended setting for the learning_rate variable.

Figure 3. As the algorithm iterates, the curve goes from red to blue. The figures show that the curve approaches the experimental data and that the MSE decreases with the iteration index i.




Parameter Estimation for Differential Equations using Scipy Least Square

This post is about parameter estimation for a differential equation: we are looking for a set of parameters that best fits the experimental data. We will go through the method of choosing initial parameter guesses and running the optimization. This post is more fully explained in this note: Differential Equation Parameter Estimations. The data used here is a fraction of the sample data I was given. The codes and sample data can also be found here.

As usual, do use a virtual environment for cleaner package management (you can see here). The packages needed can be installed via pip.

Put the scripts scipyode_header.py, cubicODE.py and cubicODE_post_processing.py into the project folder – in the virtual environment if you are using one. The code can be found at the end of the post.

    PART 1. Loading data. In scipyode_header.py, customize load_data() as you see fit. As it is, it will load the data starting from a maximum point, and assume that the csv file contains 2 columns: column 1 is for the variable t, column 2 is for n. The file does not have any header.
    Go into cubicODE.py and set the following toggles.

    plot_data = True
    initial_guess_mode = False
    optimize_mode = False

The optimized parameters seem to produce good fitting. We are done!

Figure 1. The green plot shows the experimental data points while the rest are generated using RK45 with 3 different initial guesses, given by collection_of_initial_guesses = [[0, 1e-3, 0.5e-3, 1.2e-7], [0, 1e-2, 0, 1e-7], [0, 0, 1.5e-3, 1e-7]].

Figure 2. (A, B, C) shows the results of parameter estimation from 3 different initial guesses. (D) shows the results in the console. For each serial number, the optimized parameters are printed as [G, k1, k2, k3].
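
A condensed sketch of this approach, again assuming the cubic model (the actual helpers in scipyode_header.py may differ); the initial guess is the first entry of collection_of_initial_guesses from Figure 1, and the data arrays are placeholders for the csv contents:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_data = np.linspace(0.0, 10.0, 50)           # placeholder for column 1
n_data = 40.0 / (1 + 39 * np.exp(-t_data))    # placeholder for column 2

def residuals(p):
    G, k1, k2, k3 = p
    rhs = lambda t, n: G + k1 * n + k2 * n**2 + k3 * n**3
    sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [n_data[0]],
                    t_eval=t_data, method="RK45")  # RK45, as in the post
    return sol.y[0] - n_data

result = least_squares(residuals, x0=[0, 1e-3, 0.5e-3, 1.2e-7])
print(result.x)   # optimized [G, k1, k2, k3]
```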


1 Answer

You have an initial point $(a,b)$ with $y(a)=b$ inside the open domain $D_f$.

As a first step you fit a cylinder with radii $r_a, r_b$ inside $D_f$,
$$C=\{(x,y):\ |x-a|\le r_a,\ |y-b|\le r_b\}\subset D_f.$$
This gives $h\le r_a$.

This cylinder is compact, so $f$ will have a maximum $M$ and a local Lipschitz constant $L$ on $C$. The, as of now hypothetical, solutions of the ODE IVP will have Lipschitz constant $M$, so that $|y(x)-b|\le M\,|x-a|$ as long as $y$ stays in $C$. For the fixed-point theorem it is required that the solution stays in $C$ for $x\in[a-h,a+h]$. So we require $Mh\le r_b$, or $h\le r_b/M$.

Finally, the straightforward application of the fixed-point theorem in the supremum norm on the function space requires that $Lh\le q<1$. Considering a non-fixed $q$, one can find partial solutions for any $q<1$ and $h=q/L$ that extend each other for rising $q$. Thus the only true restriction here is $h\le 1/L$. In total this gives $h=\min(r_a,\ r_b/M,\ 1/L)$.

Using the exponentially modified supremum norm $\|y\|_L=\sup_x e^{-2L|x-a|}|y(x)|$ in the Banach fixed-point theorem gives a contraction factor $\frac12$ independent of the interval length, so that only the first two restrictions on $h$ apply, allowing us to choose $h=\min(r_a,\ r_b/M)$.

Different combinations of the initial radii give different bounds (and Lipschitz constants) for $f$. Thus it may happen that reducing the initial $r_a$ and $r_b$ results in a larger value for $h$.
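
For a concrete illustration (my own example, not part of the question): take $y'=y^2$ with $y(0)=1$, so $D_f=\mathbb{R}^2$ and any radii are admissible. On the cylinder $C$ one gets $M=(1+r_b)^2$ and $L=2(1+r_b)$, so taking $r_a$ large,
$$h=\min\Bigl(r_a,\ \frac{r_b}{(1+r_b)^2}\Bigr),$$
which is maximized at $r_b=1$, giving $h=\tfrac14$. The exact solution $y=\frac{1}{1-x}$ exists on all of $[0,1)$, so the fixed-point bound is, as expected, conservative.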


TEMPORALLY HOMOGENEOUS BIRTH-DEATH MARKOV PROCESSES

Daniel T. Gillespie, in Markov Processes, 1992

6.3.D The Payroll Process

Consider a business organization in which the hiring and leaving of employees is governed by the following probabilistic rules: There exist two positive constants h and l such that, if n is the number of employees on the company payroll at time t, then the probability that a new employee will be added to the payroll in [t, t + dt) is h dt, and the probability that any one particular employee will leave the payroll in [t, t + dt) is l dt.

Against this background, we define the payroll process to be X(t) ≡ the number of employees on the company payroll at time t.

So by Eqs. (6.1-13), the associated functions a(n) and v(n) are v(n) = h − ln and a(n) = h + ln.    (6.3-52)

And by the definition (6.1-9), the stepping functions W±(n) are W₊(n) = h and W₋(n) = ln.    (6.3-51)

Comparing Eqs. (6.3-51) with Eqs. (6.3-2) shows that if l = 0 then the payroll process reduces to the Poisson process with a = h. And comparing Eqs. (6.3-51) with Eqs. (6.3-14) shows that if h = 0 then the payroll process reduces to the radioactive decay process with τ* = l⁻¹. These observations suggest that h⁻¹ can be physically interpreted as the average time between hirings, and l⁻¹ as the average individual employment time.

Substituting the above formulas for the stepping functions into the birth-death forward master equation (6.1-17) gives

∂P(n,t | n₀,t₀)/∂t = h P(n−1,t | n₀,t₀) + l(n+1) P(n+1,t | n₀,t₀) − (h + ln) P(n,t | n₀,t₀).

With effort, one can obtain an explicit analytical solution to this differential-difference equation for the required initial condition P(n,t₀ | n₀,t₀) = δ(n,n₀), but the algebraic structure of that solution is so complicated that it is of little practical use.† In Subsection 6.4.A, we shall undertake a direct calculation of the asymptotic solution P(n,∞ | n₀,t₀). For now, we shall content ourselves with calculating only the mean, variance and covariance of X(t). But we should note that, since the functions v(n) and a(n) in Eqs. (6.3-52) are both linear in n, the time-evolution equations (6.1-26) for the moments 〈Xᵏ(t)〉 will be closed, and hence solvable exactly for all k ≥ 1.

To calculate the mean of X(t), we begin by substituting into Eq. (6.1-29) the v(n) function in Eqs. (6.3-52):

(d/dt)〈X(t)〉 = 〈h − lX(t)〉 = −l〈X(t)〉 + h.

This equation is seen to be of the form (A-11a), so by Eq. (A-11b) its solution for the initial condition (6.1-32) is

〈X(t)〉 = e^(−l(t−t₀)) { n₀ + ∫[t₀,t] h e^(l(t′−t₀)) dt′ } = e^(−l(t−t₀)) { n₀ + h [(e^(l(t−t₀)) − 1)/l] },

or

〈X(t)〉 = h/l + (n₀ − h/l) e^(−l(t−t₀)).    (6.3-55)

To calculate the variance of X(t), we begin by substituting into Eq. (6.1-30) the v(n) and a(n) functions in Eqs. (6.3-52). That gives

(d/dt) var{X(t)} = 2( 〈X(t)[h − lX(t)]〉 − 〈X(t)〉〈h − lX(t)〉 ) + 〈h + lX(t)〉
= −2l var{X(t)} + h + l〈X(t)〉
= −2l var{X(t)} + 2h + (ln₀ − h) e^(−l(t−t₀)),

where the last step has invoked the result (6.3-55). Again, this differential equation is of the form (A-11a), so by Eq. (A-11b) its solution subject to the null initial condition (6.1-33) is

var{X(t)} = e^(−2l(t−t₀)) ∫[t₀,t] [2h + (ln₀ − h) e^(−l(t′−t₀))] e^(2l(t′−t₀)) dt′
= e^(−2l(t−t₀)) 2h [(e^(2l(t−t₀)) − 1)/2l] + e^(−2l(t−t₀)) (ln₀ − h) [(e^(l(t−t₀)) − 1)/l],

or

var{X(t)} = (h/l + n₀ e^(−l(t−t₀))) (1 − e^(−l(t−t₀))).    (6.3-56)

It follows from Eqs. (6.3-55) and (6.3-56) that

〈X(∞)〉 = h/l  and  var{X(∞)} = h/l.    (6.3-57)

Thus, the asymptotic fluctuations of X(t) about its mean will be of the order of the square root of that mean.

To calculate the covariance of X(t), we begin by substituting into Eq. (6.1-31) the formula for v(n) in Eqs. (6.3-52):

(∂/∂t₂) cov{X(t₁),X(t₂)} = 〈X(t₁)[h − lX(t₂)]〉 − 〈X(t₁)〉〈h − lX(t₂)〉 = −l cov{X(t₁),X(t₂)}.

The solution to this differential equation for the initial condition (6.1-34) is clearly

cov{X(t₁),X(t₂)} = var{X(t₁)} e^(−l(t₂−t₁)),

or, by invoking the result (6.3-56),

Notice that this result implies that

cov{X(t₁),X(t₂)} → (h/l) e^(−l(t₂−t₁))  as t₁ → ∞.

This says that the average employment time l −1 is also the “asymptotic decorrelation time” for the process X(t).

Now let us assume that every employee on the company payroll earns wages at the same rate. Then by adopting a monetary unit in which that common wage rate is unity, we can interpret the time-integral S(t) of the payroll process X(t) according to

S(t) = ∫[t₀,t] X(t′) dt′ = the total wages paid out by the company during [t₀, t].

It would obviously be useful for the administrators of the company to be able to estimate this process S(t). We shall content ourselves here with calculating its mean and variance. For the mean, we substitute our result (6.3-55) for 〈X(t)〉 into the general formula (6.1-35) to get

〈S(t)〉 = ∫[t₀,t] 〈X(t′)〉 dt′ = ∫[t₀,t] [ h/l + (n₀ − h/l) e^(−l(t′−t₀)) ] dt′.

This integral is easily evaluated, with the result

〈S(t)〉 = (h/l)(t − t₀) + (1/l)(n₀ − h/l)(1 − e^(−l(t−t₀))).    (6.3-61)

To calculate the variance of S(t), we must evaluate the integral (6.1-36), wherein the integrand is the solution of the differential equation (6.1-37a) for the initial condition (6.1-37b). That differential equation reads, for the v(n) function in Eqs. (6.3-52),

(d/dt) cov{S(t),X(t)} = var{X(t)} + 〈S(t)[h − lX(t)]〉 − 〈S(t)〉〈h − lX(t)〉 = −l cov{S(t),X(t)} + var{X(t)}.

Once again this equation is of the form (A-11a), so its solution for the null initial condition (6.1-37b) is, by Eq. (A-11b),

cov{S(t),X(t)} = e^(−l(t−t₀)) ∫[t₀,t] var{X(t′)} e^(l(t′−t₀)) dt′ = (h/l) e^(−l(t−t₀)) ∫[t₀,t] (e^(l(t′−t₀)) − 1)(1 + (ln₀/h) e^(−l(t′−t₀))) dt′,

where the second line has invoked the formula (6.3-56) for var{X(t)}. This last integral can be straightforwardly evaluated, with the result

Inserting this expression into the general formula (6.1-36) and performing the integration, we conclude that

An inspection of Eqs. (6.3-61) and (6.3-62) shows that, in the long-time limit, the mean and variance of S(t) are given by

〈S(t)〉 → (h/l)(t − t₀)  and  var{S(t)} → (2h/l²)(t − t₀).

The asymptotic formula for 〈S(t)〉 is seen, in light of the first of Eqs. (6.3-57), to be fairly obvious: it is simply the asymptotic average number of payrolled employees multiplied by the elapsed time. The asymptotic formula for var{S(t)} is not so obvious. It implies that the asymptotic fluctuations in S(t) about its mean will be of the order of (2/l)^(1/2) times the square root of the mean.

Figure 6-4 shows the results of a Monte Carlo simulation of the payroll process for h = 0.2 and l = 0.005. If the unit of time is weeks, then these parameter values describe a situation in which one new employee is hired on the average every h⁻¹ = 5 weeks, and each employee stays with the company for an average of l⁻¹ = 200 weeks. For these parameter values, the asymptotic mean number of employees, as predicted by the first of Eqs. (6.3-57), is h/l = 40. We have assumed in the simulation that at time t₀ = 0 there were n₀ = 20 employees. The simulation was carried out using the exact birth-death algorithm of Fig. 6-1. In Fig. 6-4a, the solid line is the realization of the employee population, and the dashed curves show the one-standard-deviation envelope 〈X(t)〉 ± sdev{X(t)} predicted by Eqs. (6.3-55) and (6.3-56). We see that, as predicted by Eqs. (6.3-57), the employee population approaches the "equilibrium band" h/l ± (h/l)^(1/2) = 40 ± 6.3 in a time of the order of l⁻¹ = 200 weeks. In Fig. 6-4b, the solid curve is the corresponding realization of the total wages paid (measured in units of the common employee weekly salary), and the dashed curves show the one-standard-deviation envelope 〈S(t)〉 ± sdev{S(t)} predicted by Eqs. (6.3-61) and (6.3-62).

Figure 6-4. A Monte Carlo simulation of the payroll process defined by the stepping functions W₊(n) = h and W₋(n) = ln, with h = 0.2, l = 0.005, and n₀ = 20. In the physical model, an average of one new employee is added to the company payroll every 5 time units, and a typical employee stays with the company for 200 time units. In (a) the solid curve shows the realization of the process X(t), the number of employees on the payroll at time t, and the dashed curves show the one-standard-deviation envelope as calculated from Eqs. (6.3-55) and (6.3-56). In (b) the solid curve shows the realization of the time-integral process S(t), which can be regarded as the total wages paid out to time t, and the dashed curves show its one-standard-deviation envelope as calculated from Eqs. (6.3-61) and (6.3-62).
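
A realization like the one in Fig. 6-4a takes only a few lines to generate. A minimal sketch of the exact birth-death simulation algorithm for the payroll process, with the text's parameter values (plotting omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_payroll(h=0.2, l=0.005, n0=20, t_max=1000.0):
    """Exact simulation of the birth-death process with W+(n) = h, W-(n) = l*n."""
    t, n = 0.0, n0
    times, pops = [t], [n]
    while t < t_max:
        w_plus, w_minus = h, l * n
        total = w_plus + w_minus
        t += rng.exponential(1.0 / total)   # waiting time to the next event
        if rng.random() < w_plus / total:
            n += 1                          # a hiring
        else:
            n -= 1                          # a departure
        times.append(t)
        pops.append(n)
    return np.array(times), np.array(pops)

times, x = simulate_payroll()
print(x[-1])   # should wander around the asymptotic mean h/l = 40
```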

Finally, we should mention that by suitably reinterpreting the two parameters h and l, the payroll process X(t) can be regarded as the instantaneous population of X molecules in the chemical system

X ⇄ B  (forward reaction R1 with rate constant c₁, reverse reaction R2 with rate constant c₂),

where the population of molecular species B is "buffered" to some effectively constant value N. To see how this comes about, let n be the number of X molecules in the system at time t. Then according to the definition (5.3-29) of the specific reaction probability rate constant, the probability of the forward reaction R1 occurring in [t, t + dt) is (c₁dt)(n), and this in turn is equal to W₋(n)dt since reaction R1 would decrease the X molecule population by one. And the probability of the reverse reaction R2 occurring in [t, t + dt) is (c₂dt)(N), and this in turn is equal to W₊(n)dt since reaction R2 would increase the X molecule population by one. Thus we have W₋(n) = c₁n and W₊(n) = c₂N. So, recalling Eqs. (6.3-51), we conclude that the instantaneous X molecule population is the payroll process with

h = c₂N  and  l = c₁.


First order differential equations are differential equations which only include the first derivative \(\dfrac{dy}{dx}\). There are no higher order derivatives such as \(\dfrac{d^2y}{dx^2}\) or \(\dfrac{d^3y}{dx^3}\) in these equations.

Linear differential equations are ones that can be manipulated to look like this:

\[\dfrac{dy}{dx} + P(x)\,y = Q(x)\]

We'll talk about two methods for solving these beasties. First, the long, tedious cumbersome method, and then a short-cut method using "integrating factors". You want to learn about integrating factors!

Let's start with the long, tedious, cumbersome, (and did I say tedious?) method. Here are the steps you need to follow:

  1. Check that the equation is linear.
  2. Introduce two new functions, \(u\) and \(v\) of \(x\), and write \(y = uv\).
  3. Differentiate \(y\) using the product rule:

\[\dfrac{dy}{dx} = u\dfrac{dv}{dx} + v\dfrac{du}{dx}\]

Example

Solve the differential equation \(\dfrac{dy}{dx} - \dfrac{y}{x} = x\)

Step 1: Check the equation is linear:

Let's carry on!
Steps 2,3 and 4: Sub in (y = uv) and (dfrac = u dfrac + vdfrac):

Step 5: Factorise the bits that involve \(v\):

Step 6: Set the part that you multiply by \(v\) equal to zero:

Step 7: The above equation is a separable differential equation. Solve it using separation of variables:

Step 8: Plug \(u = kx\) back into the equation we found at step 4. Don't forget that the term involving \(v\) is now zero and so it can be ignored:

Step 9: We now have another separable differential equation. Solve it for \(v\):

Step 10: Finally, substitute these expressions for \(u\) and \(v\) into \(y = uv\) to find the solution to the original equation:

Now let's try the sleek, sophisticated, efficient method using integrating factors.

Integrating Factors

Integrating factors let us translate our first order linear differential equation into a differential equation which we can solve simply by integrating, without having to go through all the kerfuffle of solving equations for \(u\) and \(v\), and then stitching them back together to give an equation for \(uv\).

If we have a first order linear differential equation,

\[\dfrac{dy}{dx} + P(x)\,y = Q(x),\]

then we can solve it using the integrating factor \(I(x) = e^{\int P(x)\,dx}\).

Here are the steps we need to follow. There are just a couple fewer than for the previous method:

  • Step 1: Calculate the integrating factor \(I(x) = e^{\int P(x)\,dx}\).
  • Step 2: Multiply both sides of the equation by \(I(x)\). The left hand side of the equation will be the derivative of the product \(y \cdot I(x)\).
  • Step 3: Integrate both sides of the new equation.
  • Step 4: Simplify.

Example

Solve the differential equation \(\dfrac{dy}{dx} - \dfrac{y}{x} = x\)

    Step 1: Calculate the integrating factor \(I(x) = e^{\int P(x)\,dx}\):

Example

Solve the differential equation \(\dfrac{dy}{dx} + \dfrac{2y}{x} = \cdots\)

    Step 1: Calculate the integrating factor \(I(x) = e^{\int P(x)\,dx}\):

Example

Solve the differential equation \(\dfrac{dy}{dx} - \dfrac{3y}{x+1} = (x + 1)^3\)

    Step 1: Calculate the integrating factor \(I(x) = e^{\int P(x)\,dx}\):

Example

Solve the differential equation \(\dfrac{dy}{dx} + 4xy = 4x^3\)

    Step 1: Calculate the integrating factor \(I(x) = e^{\int P(x)\,dx}\):
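
As a cross-check on the first worked example, SymPy's general ODE solver reproduces the answer that the integrating-factor method gives; a minimal sketch:

```python
# Check dy/dx - y/x = x symbolically.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x) - y(x)/x, x)
print(sp.dsolve(ode, y(x)))   # Eq(y(x), x*(C1 + x)), i.e. y = x**2 + C*x
```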




