
The Bayesian Balance

12/13/2023


Hi all! I'm pleased to announce a new publication that has just been released. In the latest issue of Skeptic magazine, I have a piece co-authored with Zafir Ivanov. Zafir participates in the Evolutionary Philosophy Circle with me, and during our last generation of activity, which focused on mental immunity, he proposed that we work on something together. The result, after a few months of hard work, is the article below. The final version is behind a paywall for subscribers to Skeptic, but this draft copy is close enough. Enjoy! And let me know in the comments below if this new thinking tool seems helpful to you.

The Bayesian Balance

How a Tool for Bayesian Thinking Can Guide Us Between Relativism and the Truth Trap
BY ED GIBNEY AND ZAFIR IVANOV


On October 17, 2005, the talk show host and comedian Stephen Colbert introduced the word “truthiness” in the premiere episode of his show The Colbert Report:[1] “We’re not talking about truth, we’re talking about something that seems like truth—the truth we want to exist.”[2] Since then, the word has become entrenched in our everyday vocabulary, but we’ve largely lost Colbert’s satirical critique of “living in a post-truth world.” Truthiness has become our truth. Kellyanne Conway opened the door to “alternative facts”[3] while Oprah Winfrey exhorted you to “speak your truth.”[4] And the co-founder of Skeptic magazine, Michael Shermer, has begun to regularly talk to his podcast guests about objective external truths and subjective internal truths, inside of which are historical truths, political truths, religious truths, literary truths, mythical truths, scientific truths, empirical truths, narrative truths, and cultural truths.[5] It is an often-heard complaint that we live in a post-truth world, but what we really have is a world with far too many claims to truth. Instead, we propose that the vital search for truth is best continued when we drop our assertions that we have something like an absolute Truth with a capital T.
 
Why is that? Consider a friend of one of us, a young-Earth creationist. He believes the Bible is inerrant. He is convinced that every word it contains, including the six-day creation story of the Universe, is Truth (spelled with a capital T because it is unquestionably, eternally true). From this position, he has rejected evidence brought to him from multiple disciplines that all converge on a much older Earth and universe. He has rejected evidence from fields such as biology, paleontology, astronomy, glaciology, and archaeology, all of which should reduce his confidence in the claim that the formation of the Earth and every living thing on it, together with the creation of the sun, moon, and stars, all took place in literally six Earth days. Even when it was pointed out to him that the first chapter of Genesis mentions liquid water, light, and every kind of vegetation before there was a sun or any kind of star whatsoever, he claimed not to see a problem. His reply to such doubts is to simply say, “with God, all things are possible.”[6]
 
Lacking any uncertainty about the claim that “the Bible is Truth,” this creationist can only conclude one of two things when faced with tough questions: (1) we are interpreting the Bible incorrectly, or (2) the evidence that appears to undermine a six-day creation is being interpreted incorrectly. These are inappropriately skeptical responses, but they are the only options left to someone who has decided beforehand that their belief is Truth. And, importantly, we have to admit that this observation could be turned back on us too. As soon as we become absolutely certain about a belief—as soon as we start calling something a capital “T” Truth—then we too become resistant to any evidence that could be interpreted as challenging it. After all, we are not absolutely certain that the account in Genesis is false. Instead, we simply consider it very, very unlikely, given all of the evidence at hand. We must keep in mind that we sample a tiny sliver of reality, with limited senses that only have access to a few of possibly many dimensions, in but one of quite likely multiple universes. Given this situation, intellectual humility is required.
 
To help us examine all of this more precisely, some history and definitions from philosophy are useful at this point, particularly from the field of epistemology, which studies what knowledge is or can be. A common starting point is Plato’s definition of knowledge as justified true belief (JTB).[7] According to this JTB formulation, all three components are necessary for our notions or ideas to rise to the level of genuine knowledge, as opposed to being dismissible as mere opinion. In an effort to make this distinction clear, definitions for all three components have been developed over the ensuing millennia as well. For epistemologists, beliefs are “what we take to be the case or regard as true.”[8] For a belief to be true, it doesn’t just need to seem correct now: “most philosophers add the further constraint that a proposition never changes its truth-value in space or time.”[9] And we can’t just stumble on these truths; our beliefs require some reason or evidence to justify them.[10]
 
Readers of Skeptic will likely be familiar with skeptical arguments from Agrippa (the problem of infinite regress[11]), David Hume (the problem of induction[12]), René Descartes (the problem of the evil demon[13]), and others that have chipped away at the possibility of ever attaining absolute knowledge. In 1963, however, Edmund Gettier fully upended the JTB theory of knowledge by showing, in what have come to be called “Gettier problems,”[14] that even if we were to manage to actually have a justified true belief, we may have just gotten there by a stroke of good luck. And the last 60 years of epistemology have shown that we can seemingly never be certain that we are in receipt of such good fortune.
 
This philosophical work has been an effort to identify an essential and unchanging feature of the universe—a perfectly justified truth that we can absolutely believe in and know. This Holy Grail of philosophy surely would be nice to know, but it actually makes sense that we don’t have this. Ever since Darwin demonstrated that all of life could be traced back to the simplest of origins, it has slowly become obvious that all knowledge is an evolving and changing thing as well. We don’t know what the future will reveal and even our most unquestioned assumptions could be upended if, say, we’ve actually been living in a simulation all this time, or Descartes’ evil demon really has been viciously deluding us. This is why Daniel Dennett titled one of his recent papers, “Darwin and the Overdue Demise of Essentialism.”[15]
 
So, what is to be done after this demise of our cherished notions of truth, belief, and knowledge? Hold onto them and claim them anyway, as that creationist does? No, that path leads to error and intractable conflict. Instead, we can keep our minds open and adjust and adapt to evidence as it comes in. This style of thinking has been formalized in recent years into what is termed Bayesian reasoning. Central to Bayesian reasoning is a conditional probability formula that helps us revise our beliefs to be better aligned with available evidence. The formula, known as Bayes’ theorem, is used to figure out how likely something is, taking into account both what we already know and new evidence. As a demonstration, consider a disease diagnosis adapted from a paper titled “How to Train Novices in Bayesian Reasoning”[16]:

​10% of adults who participate in a study have a particular medical condition. 60% of participants with this condition will test positive for the condition. 20% of participants without the condition will also test positive. Calculate the probability of having the medical condition given a positive test result.

Most people, including medical students, get the answer to this type of question wrong. From the facts above, some would say the accuracy of the test is 60%. However, this evidence must be understood in the broader context of false positives and the relative rarity of the disease. To see this, simply put actual numbers behind these percentages. For example, since the rate of the disease is only 10%, 10 in every 100 people have the condition, and the test would correctly identify 6 of them. But 90 of the 100 people don’t have the condition, and 20% of them would also receive a positive test result, meaning 18 people would be incorrectly flagged. Therefore, 24 people in total would get positive test results, but only 6 of those would actually have the disease. And that means the answer to the question is only 25%. (And, by the way, a negative result would only give you about a 95% likelihood that you were in the clear; 4 of the 76 negatives would actually have the disease.)
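
For readers who want the formula itself, Bayes’ theorem produces the same 25% directly. Writing $D$ for having the disease and $+$ for a positive test:

$$P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.6 \times 0.1}{0.6 \times 0.1 + 0.2 \times 0.9} = \frac{0.06}{0.24} = 0.25$$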
 
Now, most usages of Bayesian reasoning won’t come with such detailed and precise statistics. We will very rarely be able to calculate the probability that a fact is correct by using known weights of positive evidence, negative evidence, false positives, and false negatives. However, now that we are aware of these factors, we can try to weigh them roughly in our minds, starting with the two core norms of Bayesian epistemology: thinking about beliefs in terms of probability and updating one’s beliefs as conditions change.[17] We propose it may be easier to think in this Bayesian way using a modified version of a concept put forward by the philosopher Andy Norman, called Reason’s Fulcrum.[18]
 
Like Bayes, Norman asserts that our beliefs ought to change in response to reason and evidence, or as David Hume said, “a wise man proportions his belief to the evidence.”[19] These changes could be seen as the movement of the fulcrum lying under a simple lever. Picture a beam or a plank (the lever) with a balancing point (the fulcrum) somewhere in the middle, such as a playground seesaw or teeter-totter. As in Figure 1, you can balance a large adult with a small child just by positioning the fulcrum closer to the adult. And if you know the weight of each person, then the location of that fulcrum can be calculated ahead of time, because the ratio of the beam lengths on either side of the fulcrum is the inverse of the ratio of the masses (e.g., a person 3 times heavier is balanced at a distance ratio of 1:3).

[Figure 1. A lever in balance: a heavy adult and a small child balance when the fulcrum sits closer to the adult.]
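
In symbols, this is the ancient law of the lever: a mass $m_1$ at distance $d_1$ from the fulcrum balances a mass $m_2$ at distance $d_2$ when

$$m_1 d_1 = m_2 d_2, \qquad \text{so} \qquad \frac{d_1}{d_2} = \frac{m_2}{m_1},$$

which is why a person 3 times heavier must sit 3 times closer to the balancing point.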

If we now move to the realm of reason, we can imagine replacing the ratio of mass between an adult and child with the ratio of how likely the evidence is to be observed under a claim versus its counterclaim. Note how this phrasing captures not just the absolute quantity of evidence but the relative quality of that evidence as well. Once this is considered, the balancing point at the fulcrum gives us our level of credence in each of the two competing claims.
 
To see how this works for the medical test example given above, we start by looking at the balance point in the general population (Figure 2). Not having the disease is represented by 90 people on the left side of the lever, and having the disease is represented by 10 people on the right side. This is a ratio of 9 to 1, so to get our lever to balance, we must move the fulcrum so that the lengths of beam on either side of the balancing point have the inverse ratio of 1 to 9. This is the physical depiction of a 10% likelihood in the general population of having the medical condition. There are 10 units of distance between the two populations, and the fulcrum is on the far left, 1 unit away from all the negatives.

[Figure 2. The general population on the lever: 90 people without the condition on the left, 10 with it on the right, and the fulcrum 1 unit from the left end of a 10-unit beam.]

Next, we want to see the balance point after a positive result has been received (Figure 3). On the left-hand side, we were told the test has a 20% false positive rate, so 18 of the 90 people stay on our giant seesaw even though they don’t actually have the condition. On the right-hand side, we were told 60% of the 10 people who have the condition would test positive, so this leaves 6 people. Therefore, the new ratio after the test is 18 to 6, or 3 to 1. This means the fulcrum must be shifted to the inverse ratio of 1 to 3 in order to restore balance. There are now 4 total units of distance between the left and right, and the fulcrum is 1 unit from the left. So, after receiving a positive test result, the probability of having the condition (being in the group on the right) is 1 in 4, or 25% (the portion of beam to the left of the fulcrum). This confirms the answer we derived earlier with Bayes’ theorem, but many may find the concepts easier to grasp from the graphic representation.

[Figure 3. After a positive test: 18 false positives on the left, 6 true positives on the right, and the fulcrum 1 unit from the left end of a 4-unit beam, depicting a credence of 25%.]
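
For readers who prefer code to diagrams, the same arithmetic can be sketched in a few lines of Python using the odds form of Bayes’ theorem (posterior odds = prior odds × likelihood ratio). The function and variable names below are ours, chosen just for this illustration:

```python
# A minimal sketch of the fulcrum arithmetic, using the odds form of
# Bayes' theorem. Names are illustrative only.

def odds_to_probability(odds):
    """Convert odds in favor of a claim into a probability (beam position)."""
    return odds / (1 + odds)

# Prior: 10 people with the condition vs. 90 without.
prior_odds = 10 / 90                 # 1:9 in favor of having the condition

# A positive test is 60% likely with the condition, 20% likely without it.
likelihood_ratio = 0.60 / 0.20       # the evidence is worth a factor of 3

# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
posterior_odds = prior_odds * likelihood_ratio   # 1:3, fulcrum 1 unit from the left of 4

print(odds_to_probability(prior_odds))       # 0.1  -- credence before the test
print(odds_to_probability(posterior_odds))   # 0.25 -- credence after a positive test
```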

To recap: the position of the fulcrum under the beam marks the point where the likelihoods of observing the available evidence under two competing claims balance. This position is our credence. As we become aware of new evidence, our credence must move to restore a balanced position. In the example above, the average person in the population would have been right to hold a credence of 10% that they had a particular condition. After getting a positive test, this new evidence would shift their credence, but only to a likelihood of 25%. That’s worse for the person, but actually still pretty unlikely. Of course, more relevant evidence in the future may shift the fulcrum further in one direction or another. That is the way Bayesian reasoning attempts to wisely proportion one’s credence to the evidence.
 
What about our young-Earth creationist friend? When using Bayes’ theorem, absolute certainty means starting with a credence of 0% or 100%, which always results in an end credence of 0% or 100%, regardless of what any possible evidence might show. To guard against this, the statistician Dennis Lindley proposed something called “Cromwell’s Rule,” based on Oliver Cromwell’s famous 1650 plea: “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”[20] This rule simply states that you should never assign a probability of 0% or 100% to any proposition. Once we frame our friend’s certainty in the Truth of biblical inerrancy as setting his fulcrum to the extreme end of the beam, we get a clear model for why he is so resistant to counterevidence. Absolute certainty breaks Reason’s Fulcrum. It removes any chance for leverage to change a mind. Beliefs that reach the status of “certain truth” simply build ramps on which any future evidence effortlessly slides off (Figure 4).

[Figure 4. A broken fulcrum: with the balancing point pushed to the very end of the beam, new evidence slides off as if on a ramp.]
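
A tiny Python sketch, with evidence strengths invented purely for illustration, shows what this broken fulcrum looks like in Bayes’ theorem: a prior of exactly 0% or 100% returns itself no matter what evidence arrives:

```python
# A sketch of why Cromwell's Rule matters. The evidence strengths here
# are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a claim after observing evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# An open mind moves with the evidence:
print(bayes_update(0.5, 0.9, 0.1))       # 0.9

# Absolute certainty returns itself, no matter how lopsided the evidence:
print(bayes_update(1.0, 0.001, 0.999))   # 1.0 -- the "ramp" evidence slides off
print(bayes_update(0.0, 0.999, 0.001))   # 0.0
```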

So far, this is the standard way of treating evidence in Bayesian epistemology to arrive at a credence. The lever and fulcrum depictions simply provide a concrete way of seeing this, which may be helpful to some people. However, we also propose that this physical model might help with a common criticism of Bayesian epistemology. In the relevant academic literature, Bayesians are said to “hardly mention” sources of knowledge, the justification for one’s credence is “seldom discussed,” and “Bayesians have hardly opened their ‘black box’, E, of evidence.”[21] We propose to address this by first noting that not all evidence deserves to be placed directly onto the lever. In the medical diagnosis example, we were told exactly how many false negatives and false positives we could expect, but this is rarely known. Yet, if ten drunken campers over the course of a few decades each swear they saw something that looked like Bigfoot in the woods, we would treat that body of evidence differently than we would if it were nine drunken campers plus the pictures from one BBC high-definition camera trap set by a team of professional documentarians. How should we depict this difference between the quality and the quantity of evidence?
 
We don’t have firm rules or anything like “Bayesian coefficients” for how to precisely treat all types of evidence yet, but we can take some guidance from the history of the development of the scientific method. Evidential claims can start with something very small, such as one observation under suspect conditions given by an unreliable observer. In some cases, perhaps that’s the best we’ve got for informing our credences. Such evidence might feel fragile, but who knows? The content could turn out to be robust. How do we strengthen it? Slowly, step-by-step, we progress to observations with better tools and conditions by more reliable observers. Eventually, we’re off and running with the growing list of reasons why we trust science: replication, verification, inductive hypotheses, deductive predictions, falsifiability, experimentation, theory development, peer review, social paradigms, incorporating a diversity of opinions, and broad consensus.[22]
 
We can also bracket these various knowledge-generating activities into three separate categories of theories. The simplest type of theory explains previous evidence. This is called retrodiction. All good theories can explain the past, but we have to beware that this is also what “just-so stories” do, as in Rudyard Kipling’s entertaining theory of how the Indian rhinoceros got its skin—cake crumbs made it so itchy that it rubbed its skin until it became raw, stretched, and all folded up.[23]
 
Even better than simply explaining what we already know, good theories should make predictions. Using Newton’s theories, Edmond Halley predicted that a comet would return around Christmastime in 1758. When this unusual sight appeared in the sky on Christmas Day, the comet (now named for Halley, Newton’s close friend) was taken as very strong evidence for Newtonian physics. Theories such as this become stronger the more they explain and predict further evidence.
 
Finally, beyond predictive theories, there are ones that can bring forth what William Whewell called consilience.[24] Whewell, who coined the term “scientist,” described consilience as what occurs when a theory designed to account for one class of phenomena turns out to also account for a completely different class of phenomena. The clearest example of this is Darwin’s theory of evolution. It accounts for biodiversity, fossil evidence, geographical population distribution, and a huge range of other mysteries that previous theories could not make sense of. And this consilience is no accident, since Darwin was a student of Whewell’s and was nervous about sharing his theory until he had made it as robust as possible.
 
Combining all of these ideas, we propose a new way (Figure 5) of sifting through the mountains of evidence the world is constantly bombarding us with. We think it is useful to consider the three categories of theories, each dealing with different strengths of evidence, as a set of sieves through which we can first filter the data to be weighed in our minds. In this view, some types of evidence might be rather low quality, acting like a medical test with a false positive rate near 50%. Such poor evidence goes equally on each side of the beam and never really moves the fulcrum. However, some evidence is much more likely to be reliable and can be counted on one side of the beam at a much higher rate than the other (although never with 100% certainty). And evidence that does not fit with any theory whatsoever really ought to make us more skeptical about what we think we know, until and unless we figure out a way to incorporate it into a new theory.


[Figure 5. The three categories of theories (retrodictive, predictive, and consilient) acting as sieves that filter evidence before it is weighed on the Bayesian Balance.]
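
To make the sieve idea concrete, here is one last sketch extending the odds-form snippet above, again with invented rates: evidence whose false positive rate nearly matches its hit rate has a likelihood ratio near 1 and barely moves the fulcrum, even in bulk:

```python
# One more sketch, extending the odds-form snippet above. All rates here
# are invented for illustration.

def odds_to_probability(odds):
    return odds / (1 + odds)

prior_odds = 1 / 9            # the 10% prior credence from the medical example

weak_lr = 0.55 / 0.50         # drunken-camper-grade evidence: barely better than chance
strong_lr = 0.60 / 0.20       # the well-characterized medical test from earlier

print(odds_to_probability(prior_odds * weak_lr))    # ~0.109 -- the fulcrum barely moves
print(odds_to_probability(prior_odds * strong_lr))  # 0.25

# Even ten independent weak sightings shift credence less than one good test:
print(odds_to_probability(prior_odds * weak_lr ** 10))   # ~0.22
```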

We submit that this mental model of a Bayesian Balance allows us to adjust our credences more easily and intuitively. It also never tips the lever all the way over into unreasonable certainty. To use it, you don’t have to delve into the history of philosophy, epistemology, skepticism, knowledge, justified true beliefs, Bayesian inferences, or difficult calculations using probability notation and unknown coefficients. You simply need to keep weighing the evidence and paying attention to which kinds of evidence are more or less likely to count. Remember that observations can sometimes be misleading, so a good question to guide you is, “Could my evidence be observed, even if I’m wrong?” Asking this fosters a properly skeptical mindset. It frees us from the truth trap, yet enables us to move forward, wisely proportioning our credences as best as the evidence allows.
 
References
[1] Sternbergh, A. (2006, October 16). Stephen Colbert Has America by the Ballots. New York Magazine. https://nymag.com/news/politics/22322/
[2] The Paley Center for Media. (2009, November 7). Colbert Report Writers—Truthiness and Pun Journals. https://www.youtube.com/watch?v=WvnHf3MQtAk
[3] Blake, A. (2017, January 22). Kellyanne Conway says Donald Trump’s team has ‘alternative facts.’ Which pretty much says it all. Washington Post. https://www.washingtonpost.com/news/the-fix/wp/2017/01/22/kellyanne-conway-says-donald-trumps-team-has-alternate-facts-which-pretty-much-says-it-all/
[4] Friedersdorf, C. (2018, January 8). The Difference Between Speaking ‘Your Truth’ and ‘The Truth.’ The Atlantic. https://www.theatlantic.com/politics/archive/2018/01/the-power-and-perils-of-speaking-your-truth/549968/
[5] See especially: Shermer, M. (n.d.). Jordan Peterson & Michael Shermer on Science, Myth, Truth, and the Architecture of Archetypes (174). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/jordan-peterson-beyond-order-12-more-rules-for-life/. For more examples see: Shermer, M. (n.d.). Simon Winchester—How We Transfer Knowledge Through Time (355). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/simon-winchester-transmission-of-knowledge-from-ancient-widsom-to-modern-magic/. Shermer, M. (n.d.). The Sacred Depths of Nature—Ursula Goodenough on How to Find Sacred Scientific Spirituality (336). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/natures-sacred-depths-ursula-goodenough-on-finding-sacred-scientific-spirituality/. Shermer, M. (n.d.). Gale Sinatra & Barbara Hofer—Science Denial: Why It Happens and What to Do About It (212). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/science-denial-why-it-happens-and-what-to-do-about-it-gale-sinatra-barbara-hofer/. Shermer, M. (n.d.). Richard Dawkins on evangelizing for evolution, science, skepticism, philosophy, reason, and rationality, based on his new book Books Do Furnish a Life: Reading and Writing Science (205). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/books-do-furnish-a-life-reading-and-writing-science-richard-dawkins/. Shermer, M. (n.d.). Jonathan Rauch—The Constitution of Knowledge: A Defense of Truth (190). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/jonathan-rauch-constitution-of-knowledge-a-defense-of-truth/. Shermer, M. (n.d.). Robert Pennock—An Instinct for Truth: Curiosity and the Moral Character of Science (98). Retrieved July 23, 2023, from https://www.skeptic.com/michael-shermer-show/robert-pennock-an-instinct-for-truth-curiosity-moral-character-of-science/. Shermer, M. (2020, June 26). What is Truth, Anyway? [YouTube]. https://www.skeptic.com/skepticism-101/what-is-truth-anyway-lecture/.
[6] Holy Bible, New International Version, Matthew 19:26. (n.d.). Retrieved July 23, 2023, from https://www.biblegateway.com/passage/?search=Matthew%2019%3A26&version=NIV
[7] Ichikawa, J. J., & Steup, M. (2018). The Analysis of Knowledge. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2018). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2018/entries/knowledge-analysis/
[8] Schwitzgebel, E. (2021). Belief. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2021). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2021/entries/belief/
[9] Dowden, B., & Swartz, N. (n.d.). Truth. Internet Encyclopedia of Philosophy. Retrieved July 23, 2023, from https://iep.utm.edu/truth/
[10] Hasan, A., & Fumerton, R. (2022). Foundationalist Theories of Epistemic Justification. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2022). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2022/entries/justep-foundational/
[11] Laertius, D. (1925). Lives of Eminent Philosophers: Vol. Book IX (R. D. Hicks, Ed.). http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0258%3Abook%3D9
[12] Hume, D. (1902). An Enquiry Concerning Human Understanding (L. A. Selby-Bigge, Ed.; Second). https://www.gutenberg.org/cache/epub/9662/pg9662.txt
[13] Gillespie, M. A. (1996). Chapter One: Descartes and the Deceiver God. In Nihilism Before Nietzsche. University of Chicago Press.
[14] Hetherington, S. (n.d.). Gettier Problems. Internet Encyclopedia of Philosophy. Retrieved July 23, 2023, from https://iep.utm.edu/gettier/
[15] Dennett, D. C. (2016). Darwin and the Overdue Demise of Essentialism. In D. L. Smith (Ed.), How Biology Shapes Philosophy: New Foundations for Naturalism (pp. 9–22). Cambridge University Press. https://doi.org/10.1017/9781107295490.002
[16] Büchter, T., Eichler, A., Steib, N., Binder, K., Böcherer-Linder, K., Krauss, S., & Vogel, M. (2022). How to Train Novices in Bayesian Reasoning. Mathematics, 10(9), 1558. https://doi.org/10.3390/math10091558
[17] Lin, H. (2022). Bayesian Epistemology. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2022). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2022/entries/epistemology-bayesian/
[18] Norman, A. (2021). Mental Immunity: Infectious Ideas, Mind-Parasites, and the Search for a Better Way to Think. Harper Wave. https://www.harperacademic.com/book/9780063003002/mental-immunity
[19] Hume, D. (1902). An Enquiry Concerning Human Understanding (L. A. Selby-Bigge, Ed.; Second). https://www.gutenberg.org/cache/epub/9662/pg9662.txt
[20] Jackman, S. (2009). The Foundations of Bayesian Inference. In Bayesian Analysis for the Social Sciences. John Wiley & Sons.
[21] Hajek, A., & Lin, H. (2017). A Tale of Two Epistemologies? Res Philosophica, 94(2), 207–232. http://dx.doi.org/10.11612/resphil.1540
[22] Oreskes, N. (2019). Why Trust Science? Princeton University Press. https://press.princeton.edu/books/hardcover/9780691179001/why-trust-science
[23] Kipling, R. (1902). Just So Stories (D. Reed & D. Widger, Eds.). https://www.gutenberg.org/files/2781/2781-h/2781-h.htm
[24] Whewell, W. (1847). The Philosophy of the Inductive Sciences, Founded Upon Their History. London: J.W. Parker. http://archive.org/details/philosophyinduc00goog

Zafir Ivanov
Zafir has had a lifelong interest in how we form beliefs and why many people resist counterevidence. This interest has led him to the research literature, to experimenting with difficult conversations, and to amateur ethnography.
 
Ed Gibney
Ed writes fiction and philosophy while trying to bring an evolutionary perspective to both of those pursuits. His work can be found at evphil.com.
