Monday, March 27, 2017

MH370 Debris Finds Revisited (March 2017)

Some time ago I made a Weibull prediction of the rate of MH370 debris finds. The finds tracked the prediction well for about 13 months after the flaperon find, but things have since tailed off considerably. No recent confirmed finds have been made. I am not sure whether this is because there is no more debris left to be found or because no one is actively engaged in looking for it.

original Weibull

Updated Weibull plot below. The last red dot is imaginary, and represents how the data would look if a new piece of debris were found today (some 20 months after the flaperon find). Obviously, the dot is well off the predicted Weibull debris recovery curve based on the rate of debris recovery for the first year or so after the flaperon find.

If the debris found on 01-28-17 in South Africa near East London is included as valid MH370 debris, the CDF would appear as below. The last red dot is now "real", but moving the assumed find a couple of months earlier (end of January 2017 rather than the "imaginary" end-of-March 2017 find above) makes little difference in the conclusion.


If the above find and the "Morkel find" in December of 2016 are both included, the CDF is as below.


Thursday, March 23, 2017

Must Read - "Irresistible" by Adam Alter

Results below are from 8,000 smartphone users of the "Moment" application, which records screen time. "Moment" does not record time spent listening to music or making phone calls. The data below is actually biased low, since the people who downloaded the "Moment" application were already concerned about spending too much time playing with their smartphones. Most of the real addicts don't have a clue.




Tuesday, March 21, 2017

Language Switch Matrix

Great links:

To article describing matrix below:

language x to language y

Google was worth $25B when they went public in 2004.

$25B Eigenvector

G



Monday, March 13, 2017

Another Way to Look at Things?

Don't really know the answer to the post title, but I have been absorbed by a conceptualization related to BTO and Doppler compensation. Doppler compensation refers to the frequency offset put on the L band carrier by the AES. This compensation only uses velocity components in the local tangent plane. No adjustments are made for vertical components of velocity. Also this compensation is performed by assuming that the Inmarsat satellite is stationary at its nominal position over the equator.

Consider, for example, the BTO values at 19:41 and 20:41, which are 11500 usec and 11740 usec respectively, a difference of 240 usec. The path difference associated with this time difference is 240 usec * C/2 = 35975 meters (where C is the speed of light). The average Doppler shift between 19:41 and 20:41 must accumulate 35975 meters of L band cycles. One wavelength at the L band frequency is 0.182 meters, so the number of cycles is 35975 m / 0.182 m = 197665 cycles. These cycles are accumulated over one hour, giving a frequency of 197665 cycles / 3600 seconds = 55 Hz. So the average Doppler needed is 55 Hz.
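
As a sanity check, a few lines of Python reproduce the arithmetic chain above from BTO difference to average Doppler, using the constants as stated in the text:

```python
# Average Doppler needed to accumulate the path-length change implied
# by the BTO difference between 19:41 and 20:41.

C = 299_792_458.0      # speed of light, m/s
WAVELENGTH = 0.182     # L-band wavelength, m

bto_1941 = 11_500e-6   # s
bto_2041 = 11_740e-6   # s

# BTO is a two-way delay, so the one-way path change is dt * C / 2
path_diff = (bto_2041 - bto_1941) * C / 2      # ~35,975 m
cycles = path_diff / WAVELENGTH                # ~197,665 cycles
avg_doppler = cycles / 3600.0                  # accumulated over one hour

print(f"path difference: {path_diff:.0f} m")
print(f"cycles:          {cycles:.0f}")
print(f"average Doppler: {avg_doppler:.1f} Hz")
```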

If one plots the Doppler compensation for various trial paths, it is clear that it is very linear, meaning that the average of the Doppler compensation at 19:41 and 20:41 is a reasonable estimate of the Doppler average over that leg. My suggested flight path scenario via the Cocos has an average Doppler compensation in the vicinity of 35 Hz between 19:41 and 20:41, and likewise for the McMurdo path recently suggested by Iannello and Godfrey. This deviation seems too large relative to the expected 55 Hz.

So I asked DrB for his most recent path parameters at 19:41 and 20:41. The reply was:

19:41 =>  0.018N  93.719E  452.7 knots
20:41 => 7.540S  93.191E  452.1 knots

DrB warned me to be careful about determining a heading, since the heading is variable and affected by several different parameters. So I did the best I could, and estimated the heading at 19:41 as 183 and the heading at 20:41 as 182. I think these values are close (DrB can refine them for himself).

So, using the above values one computes the Doppler compensations as:

19:41 = -37.7 Hz
20:41 = 143.6 Hz

The average Doppler is 53 Hz, which is amazingly close to the required value, much closer in fact than any path I had examined prior to DrB's, and well within the errors associated with the BTO measurements. I should also point out that the average Doppler computed as above is extremely sensitive to position, speed, and heading. Others will agree when they try the method for themselves.
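
The compensation values above can be roughly reproduced with a spherical-Earth sketch. This is my own simplified reconstruction, not DrB's calculation or the AES's actual algorithm; the cruise altitude (35,000 ft), the L-band carrier (1646.6525 MHz), and the nominal satellite longitude (64.5E) are my assumptions:

```python
import math

C = 299_792_458.0          # speed of light, m/s
F_L = 1646.6525e6          # Hz, assumed L-band carrier
R_E = 6371e3               # m, mean Earth radius (spherical model)
R_SAT = 42164e3            # m, geostationary orbit radius
SAT_LON = 64.5             # deg, assumed nominal sub-satellite longitude

def ecef(lat, lon, r):
    """Spherical lat/lon/radius to ECEF coordinates (meters)."""
    la, lo = math.radians(lat), math.radians(lon)
    return (r * math.cos(la) * math.cos(lo),
            r * math.cos(la) * math.sin(lo),
            r * math.sin(la))

def compensation(lat, lon, speed_kn, heading_deg, alt_m=10668.0):
    """AES Doppler pre-compensation (Hz): minus the predicted uplink
    Doppler, using only the horizontal (local-tangent-plane) velocity
    and the nominal satellite position, as described in the post."""
    ac = ecef(lat, lon, R_E + alt_m)
    sat = ecef(0.0, SAT_LON, R_SAT)
    los = [s - a for s, a in zip(sat, ac)]
    norm = math.sqrt(sum(x * x for x in los))
    los = [x / norm for x in los]
    # horizontal velocity in ENU, then rotated into ECEF
    v = speed_kn * 0.514444
    la, lo = math.radians(lat), math.radians(lon)
    vE = v * math.sin(math.radians(heading_deg))
    vN = v * math.cos(math.radians(heading_deg))
    east = (-math.sin(lo), math.cos(lo), 0.0)
    north = (-math.sin(la) * math.cos(lo),
             -math.sin(la) * math.sin(lo),
             math.cos(la))
    v_ecef = [vE * e + vN * n for e, n in zip(east, north)]
    closing = sum(vi * li for vi, li in zip(v_ecef, los))  # m/s toward sat
    return -F_L * closing / C

d1941 = compensation(0.018, 93.719, 452.7, 183)
d2041 = compensation(-7.540, 93.191, 452.1, 182)
print(f"19:41: {d1941:.1f} Hz, 20:41: {d2041:.1f} Hz, "
      f"avg: {(d1941 + d2041) / 2:.1f} Hz")
```

With these assumptions the sketch lands within about half a hertz of the -37.7 and 143.6 figures quoted above.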

So, my conclusion is that the 19:41 and 20:41 points derived by DrB seem to pass this test.

Someone will say that using the nominal satellite position is not correct. In fact, I believe it is the right thing to do, since the AES compensation itself assumes the nominal position. This notion needs more work. It might be that for small changes in BTO the actual satellite position is an important consideration.

I would encourage others reading this post to try this method for themselves. I think the conclusion will be that it is a very very sensitive test of path parameters. Of course, I could be very wrong. Turn on the flame throwers. I welcome aggressive feedback.

I do expect something to be incorrect about the approach presented above. I simply do not see what it is.

Edit 3/14/17

My suspicion is that for small BTO differences the contribution of satellite motion to the BTO is not insignificant.


Thursday, March 9, 2017

Rodent Proofing a Generator

So, living in the boonies means that the grid may be down for weeks of the year. You must have generator backup. The recent storm in Stonyford had power out for several days. I fired up my expensive Honda generator and it did not work. Had to go to the backup.

Took me a few days to figure out that the Honda had been invaded by rodents. Nests built inside and wires chewed. Took me the better part of a day to bring it back on line. So, I conceived of a rodent proof structure for the Honda.





MH370 - Debris Weibull

I will say at the start that I am not a huge fan of the Weibull Distribution. It has a steep learning curve, and it is the weapon of choice of reliability engineers (with whom I have had a very poor relationship for most of my career). Of course, the reliability engineers buy expensive third-party implementations, since there is no possibility they would be able to utilize the distribution directly.

The Weibull Distribution has the disadvantage that it is completely unrelated to any form of underlying causality. It is simply a way to fit a math model to data gathered, and makes no attempt to understand the relationship of the gathered data to anything else but time. You can think of it as simply an elegant curve fit. I have no idea why it seems to work as well as it does.

Having said all that, my experience with Weibull has been excellent. It seems to be rather uncanny in its ability to forecast the future based on the past. I have no explanation. There are both three-parameter and two-parameter Weibull implementations; the latter is the most popular and is the one used here. Its parameters, "shape" and "scale", are extracted using linear regression techniques applied to the existing data samples.
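
As an illustration of that regression step, here is a minimal two-parameter Weibull fit using Benard's median-rank approximation. The find times in the example are made up for illustration; they are not the actual MH370 find dates:

```python
import math

def weibull_fit(times):
    """Fit shape k and scale lam by least squares on the linearized CDF:
    ln(-ln(1 - F)) = k*ln(t) - k*ln(lam), with median ranks F_i."""
    t = sorted(times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)       # Benard's median rank
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    lam = math.exp(mx - my / k)         # from the fitted intercept
    return k, lam

# illustrative (hypothetical) find times, in months since the flaperon
finds = [0.1, 5, 7, 8, 10, 11, 12, 16, 17, 19]
k, lam = weibull_fit(finds)
print(f"shape k = {k:.2f}, scale = {lam:.1f} months")
```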

In this model the sampled data consists of the three confirmed pieces of MH370 debris and the seven almost certain pieces of MH370 debris. The best fit Weibull plot for this data is shown below.

The red dots are the ten debris finds over the 19 months since the flaperon was found at the end of July 2015. Weibull suggests that the 10 pieces of debris found represent about 15% of all the debris of that type that is going to be found (a debris total of between 60 and 70 pieces). Weibull predicts that 80% of the debris (50 pieces or so) will be found within 100 months (8 years or so) of the flaperon find. It is interesting that the Weibull distribution "adjusted itself" to the earlier Poisson distribution (elsewhere in this blog). Poisson predicts that there is a 30% chance of finding one piece of qualified debris in any given month, and a 10% chance of two finds. Well, in 100 months you would expect 10 months to produce 20 pieces and the remaining 90 months to produce 27 pieces, for a total of 47 pieces in 100 months.
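
The tally in the paragraph above can be written out explicitly:

```python
# Back-of-envelope Poisson tally: a 10% chance of a two-find month and a
# 30% chance of a single find in the remaining months, over 100 months.
months = 100
double_months = months // 10                  # 10 two-find months
single_rate = 0.30                            # one find, 30% of the time
pieces = double_months * 2 + (months - double_months) * single_rate
print(pieces)   # 47.0
```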

Of course, the distribution has no way of knowing if the search for debris will be escalated or reduced. The distribution has no way of knowing if the debris itself is perishable (sinking or otherwise becoming unavailable). It simply says that the most likely result based on the history of previous finds is the result indicated above. Likewise with the rate predictions of Poisson.

Over the next few years Weibull suggests the debris finds will continue to occur at approximately the same rate as the debris found so far. Of course, with Blaine Gibson out of the loop it may be that the debris finds will taper off significantly.

Monday, March 6, 2017

Two Envelope Problem

The problem statement from Wiki:

Two Envelopes Problem

At the start you obviously have an expected value of 1/2*(2A) + 1/2*(A) = 3/2*A, where 2A and A are the amounts in the two envelopes you can choose from.

So you make a choice. The envelope you select has an unknown value of X, and you are given the option to switch envelopes. The other envelope contains either 2X or X/2 which has an expected value of 1/2*(2X) + 1/2*(X/2) = 5/4*X. Therein lies the problem with expected value theory. It makes no sense to switch envelopes despite the higher expected value.

Your choice of an envelope does not change the original expected value of 3/2*A. Applying a second expected value calculation to a problem statement in which the initial conditions have not changed is simply wrong; selecting an envelope has no effect on the expected value of its contents or on the expected value of the contents of the envelope which was not selected.

Gell-Mann and Peters are exploring expected value theory from the ground up. It is extremely fragile, and time based observations are challenging it.

A simulation was run in which 6 sets of 50 trials were performed. In three of the sets the person kept the envelope initially selected; in the other three the person switched envelopes. No benefit was derived from switching envelopes. In 50 trials the expected total would be 50 * 3/2 = 75 if the value of A is set to one unit.
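
A minimal version of such a simulation (my own sketch, with A = 1 and many more trials to tighten the averages):

```python
import random

def run(trials, switch, rng):
    """Average winnings per trial: envelopes hold A and 2A (A = 1);
    either always keep the first pick or always switch."""
    total = 0.0
    for _ in range(trials):
        envelopes = [1.0, 2.0]
        pick = rng.randrange(2)
        if switch:
            pick = 1 - pick
        total += envelopes[pick]
    return total / trials

rng = random.Random(42)
keep_avg = run(100_000, switch=False, rng=rng)
switch_avg = run(100_000, switch=True, rng=rng)
print(f"keep: {keep_avg:.3f}, switch: {switch_avg:.3f}")  # both near 1.5
```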


Saturday, March 4, 2017

Bertrand Paradox

The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités (1889)[1] as an example to show that probabilities may not be well defined if the mechanism or method that produces the random variable is not clearly defined. The link below has to be read to understand the paradox.

The Bertrand paradox goes as follows: Consider an equilateral triangle inscribed in a circle. Suppose a chord of the circle is chosen at random. What is the probability that the chord is longer than a side of the triangle?

Bertrand Paradox - Wiki

In his 2007 paper, "Bertrand’s Paradox and the Principle of Indifference",[7] Nicholas Shackel affirms that after more than a century the paradox remains unresolved, and continues to stand in refutation of the principle of indifference. Also, in his 2013 paper, "Bertrand’s paradox revisited: Why Bertrand’s ‘solutions’ are all inapplicable",[8] Darrell P. Rowbottom shows that Bertrand’s proposed solutions are all inapplicable to his own question, so that the paradox would be much harder to solve than previously anticipated.

The paradox has been investigated for more than a century. I offer the following simple solution. Drop a circle randomly on a plane containing a line - or alternatively drop a line on a plane containing a circle. If the circle contacts the line a chord is created and its length tested. If the circle does not contact the line no chord is created and this trial is discarded.

It is easy to see that when the circle contacts the line, the line can cross anywhere along the diameter of the circle perpendicular to it with equal probability. In the figure below the line extends to infinity in both directions.


It is known that the side of an inscribed equilateral triangle bisects the radius of the circle when the radius is drawn perpendicular to that side. Therefore half of the random events described above (all positions on the diameter being equally likely) will result in a chord shorter than the side of an inscribed equilateral triangle, i.e. exactly one half of the diameter crossings result in a chord shorter than the side. In the figure above, chords that intersect the highlighted red "half diameter" will be longer than the side of an inscribed equilateral triangle.

This method of selecting the chord is truly random and abides by the principle of "maximum ignorance" (as well as scale and translation invariance).
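
A quick Monte Carlo check of this selection method (my sketch, unit radius): sample the chord's offset from the center uniformly along the perpendicular diameter and count how often the chord beats the triangle side.

```python
import math
import random

def long_chord_fraction(trials, r=1.0, rng=None):
    """Fraction of random line/circle contacts whose chord is longer
    than the side (r*sqrt(3)) of the inscribed equilateral triangle.
    The offset d of the chord from the center is uniform on [-r, r]."""
    rng = rng or random.Random()
    side = r * math.sqrt(3.0)
    longer = 0
    for _ in range(trials):
        d = rng.uniform(-r, r)
        chord = 2.0 * math.sqrt(r * r - d * d)
        if chord > side:          # equivalent to |d| < r/2
            longer += 1
    return longer / trials

frac = long_chord_fraction(200_000, rng=random.Random(7))
print(f"fraction longer than triangle side: {frac:.3f}")  # ~0.5
```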

Q.E.D.

Pathetic

As the third anniversary of the disappearance of MH370 approaches it is time to reflect, as others are doing, on the search effort. What should have been done differently and what should not have been done at all.

Let's start with the flow of information, or rather the lack of information, from the searchers. We have been denied numerous details regarding the flight for no apparent reason. The most significant of these details in my view are:

1> Radar data - there is absolutely no reason this data has not been placed in the public domain.

2> Flight data - the data from the 20 previous flights of 9M-MRO has been withheld. Again there is no apparent reason why. Certainly it was available to the DSTG and others in the official search group.

3> The cell phone registration near Penang was withheld until the RMP Report was leaked. Why?

4> The information regarding Shah's simulator was withheld until the RMP Report leak. Why?

5> Various internal reports. In particular:

[7]  “Internal study regarding SATCOM ground-station logs,” MH370 Flight Path Reconstruction Group - SATCOM Subgroup. 

The above reference is cited numerous times by Holland in his BFO paper, yet it is not available for review.

1>, 2> and 5> of the above have been requested from the ATSB. None have been provided. Why?

The LOR at 18:25 remains a mystery. No one has explained why this event occurred or the details of the event itself. Holland references several other observed logons in his paper:

"The Use of Burst Frequency Offsets in the Search for MH370".

None replicated the 18:25 event. Our resident SATCOM experts have explained their take on the 18:25 event, but it is all speculation. The question is why the SSWG/DSTG did not gather up several of the SDUs used in 9M-MRO and run hundreds of logons at various temperatures and "off" periods. This equipment is readily available. Why screw around with a few previously logged events, none of which mirror the 18:25 event? Total forensic incompetence in my view.

Basically the analytics and information flow associated with the search for MH370 has been truly questionable. While I certainly admire the analytical skills of many of the IG people and several others outside the IG, I strongly question their judgement. They have to know that the ISAT data is not capable of predicting a terminal location, yet they keep grinding away at it. There are legitimate papers supporting terminal locations from 26S to 40S. What is the point? Who is going to act on this information? Why keep doing this? It makes no sense whatever. As I have stated many many times, the initiation of an underwater search based on the SSWG analytics (or anyone else's analytics) was an extremely poor decision.

Why has the ATSB refused to release the information above? The entire search effort has been and is marred by secrecy and incompetence. Why is important information still being withheld from the public domain?

The last three years have been pathetic (and very very frustrating).