diff --git a/06_gravity.Rmd b/06_gravity.Rmd
index 0e63f67d..34037aec 100644
--- a/06_gravity.Rmd
+++ b/06_gravity.Rmd
@@ -1,6 +1,6 @@
# Escaping from Mars

-```{julia chap_6_libraries, cache = TRUE, results = FALSE, echo=FALSE}
+```{julia chap_6_libraries, cache = FALSE, results = FALSE, echo=FALSE}
cd("./06_gravity/")
import Pkg
Pkg.activate(".")
@@ -27,12 +27,12 @@ plot(im_0,axis=nothing,border=:none)
#img_small = imresize(im_0, ratio=1/3)
```

-To simplify, we approximate the escape velocity as:
+Assuming that the planet is spherical and considering only the gravitational effect of Mars, the escape velocity can be described as

$v_{escape}=\sqrt{2*g_{planet}*r_{planet}}$

-where $r$ is the radius of the planet and $g$ the constant of gravity at the surface. Suppose that we remember from school that the escape velocity
-from Earth is $11\frac{km}{s}$ and that the radius of Mars if half of the earth's.
+where $r$ is the radius of the planet and $g$ the gravitational acceleration at its surface. Suppose that we remember from school that the
+escape velocity from Earth is $11\frac{km}{s}$ and that the radius of Mars is half of the Earth's.

We remember that the gravity of Earth at its surface is $9.8\frac{m}{s^2}$, so all we need to estimate the escape velocity of Mars is the gravity
of the planet at its surface. So we decide to make an experiment and gather some data. But what exactly do we need to measure? Let's see.

@@ -40,19 +40,20 @@ of the planet at its surface. So we decide to make an experiment and gather some
We are going to calculate the constant $g_{mars}$ just by throwing stones. We will briefly explain the equations involved in the experiment.
The topic we need to revisit is Projectile Motion.

- ## Proyectile Motion
- Gravity pulls us down to Earth, or in our case, to Mars. This means that we have an acceletation, since there is a force. Recalling the newton equation:
+## Projectile Motion
+Gravity pulls us down to Earth, or in our case, to Mars. This means that we have an acceleration, since there is a force. Recalling
+Newton's equation:

$\overrightarrow{F} = m * \overrightarrow{a}$

- where $m$ is the mass of the object, $\overrightarrow{F}$ is the force(it's what make us fall) and $\overrightarrow{a}$ is the acceleration, in
- our case is what we call gravity $\overrightarrow{g}$.
- The arrow $\overrightarrow{}$ over the letter means that the quantity has a direction in space, in our case, gravity is pointing to the center of
- the Earth, or Mars.
+where $m$ is the mass of the object, $\overrightarrow{F}$ is the force (it's what makes us fall) and $\overrightarrow{a}$ is the acceleration,
+which in our case is what we call gravity, $\overrightarrow{g}$.
+The arrow $\overrightarrow{}$ over a letter means that the quantity has a direction in space; in our case, gravity points to the center of
+the Earth, or Mars.

*How can we derive the motion of the stones with that equation?*

-In the figure below we show a sketch of the problem: We have the 2 axis, $x$ and $y$, the $x$ normally is parallel to the ground and the $y$ axis
+In the figure below we show a sketch of the problem: we have two axes, $x$ and $y$; the $x$ axis is parallel to the ground and the $y$ axis
is perpendicular, pointing to the sky. We also draw the initial velocity $v_0$ of the projectile, and the angle $\theta$ with respect to the
ground. Also, it's important to notice that gravity points in the opposite direction of the $y$ axis.

@@ -81,14 +82,16 @@
can go. Then, the stone starts to go down.
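To make this rise-and-fall picture concrete, here is a minimal sketch of the trajectory equation the chapter works with; the values of `v0`, `θ`, and `g` below are made up for illustration, not taken from the chapter's data:

```julia
# Height as a function of time for a projectile: y(t) = v0*sin(θ)*t - g*t^2/2.
# All numeric values here are assumed, purely for illustration.
v0 = 10.0          # initial speed, m/s
θ  = deg2rad(45)   # throwing angle, radians
g  = 3.7           # placeholder surface gravity, m/s^2

y(t) = v0 * sin(θ) * t - g * t^2 / 2

t_apex = v0 * sin(θ) / g   # the velocity is zero here: the highest point
t_hit  = 2 * t_apex        # by symmetry, the stone lands at twice that time

y(t_apex)   # maximum height reached
y(t_hit)    # ≈ 0: the stone is back on the ground
```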
*How does the velocity evolve in this trajectory?* From the beginning, the velocity decreases until it reaches 0 at the highest point, where the stone stops for a moment, then it
-changes its direction and start to increase again, pointing towards the ground. Ploting the evolution of the height of the stone, we obtain the plot shown below. We see that, at the begining the stone starts to go up fast and then it slows down. We see that for each value of $y$ there are 2 values of $t$ that satisfies the equation, thats because the stone pass twice for each point, except for the highest value of $y$.
+changes its direction and starts to increase again, pointing towards the ground. Plotting the evolution of the height of the stone, we obtain
+the plot shown below. We see that, at the beginning, the stone goes up fast and then slows down. We also see that for each value of $y$
+there are 2 values of $t$ that satisfy the equation; that's because the stone passes through each point twice, except for the highest value of $y$.

```{julia chap_6_plot_3, echo=FALSE}
im_3 = load("./06_gravity/images/img90_.png");
plot(im_3,axis=nothing,border=:none)
```

-So, in the example we ahve just explained, we have that the throwing angle is θ=90°, so sin(90°)=1, the trajectory in $y$ becomes:
+So, in the example we have just explained, the throwing angle is θ=90°, so sin(90°)=1, and the trajectory in $y$ becomes:

$y(t) = v_0*t -\frac{g*t^2}{2}$

And the velocity, which is the derivative of the above equation, becomes:

@@ -108,7 +111,7 @@ The experiment set up will go like this:

- One person will be throwing stones at an angle.
- The other person will be watching from some distance, measuring the time from when the stone is thrown until it hits the ground. The other measurement we will need is the distance Δx the stone travelled.
-- Also, for the first iteration of the experiment, suppose we only keep the measurements with and initial angle θ~45° (we will loosen this
+- Also, for the first iteration of the experiment, suppose we only keep the measurements with an initial angle θ~45° (we will loosen this
constraint in a bit).

```{julia chap_6_plot_4, echo=FALSE}
@@ -127,14 +130,13 @@ using Turing

Suppose we did the experiment and measured the 5 points, Δx and Δt, shown below:

```{julia, results = FALSE}
-Δx_measured = [25.94, 38.84, 52.81, 45.54, 17.24]
-t_measured = [3.91, 4.57, 5.43, 4.85, 3.15]
+Δx = [25.94, 38.84, 52.81, 45.54, 17.24]
+Δt = [3.91, 4.57, 5.43, 4.85, 3.15]
```

*Now, how do we estimate the constant g from those points?*

-Using the equations of the trajectory, when the stone hits the ground, $y(t) = 0$, since we take the start of the $y$ coordinate in the ground
-(negleting the initial height with respect to the maximum height), so finding the other then the initial point that fulfill this equation, we find that:
+When the stone hits the ground, we have that $y(t) = 0$. If we solve for $t$, ignoring the trivial solution $t = 0$, we find that

$t_{f} = \frac{2*v_{0}*sin(θ)}{g}$

@@ -145,7 +147,7 @@
$Δx=t_{f}*v_{0}*cos(θ)$

where Δx is the distance traveled by the stone.
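As an aside before the chapter solves for $v_0$: eliminating $v_0$ between the last two relations gives $g = \frac{2\tan(\theta)\,\Delta x}{t_f^2}$ (the same expression the chapter derives later when discussing the throwing angle), which already allows a deterministic plausibility check on the data, assuming θ is exactly 45°. A sketch, not part of the chapter's analysis:

```julia
# Point estimates of g from each measurement pair, assuming θ = 45° exactly.
# This is only a sanity check; the Bayesian treatment below handles uncertainty.
Δx = [25.94, 38.84, 52.81, 45.54, 17.24]
Δt = [3.91, 4.57, 5.43, 4.85, 3.15]
θ  = 45

g_estimates = 2 .* tand(θ) .* Δx ./ Δt .^ 2
# ≈ [3.39, 3.72, 3.58, 3.87, 3.48], all clustered around 3.4–3.9 m/s²
```

These rough values already hint at where the posterior distributions below should concentrate.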
- So, solving for $v_{0}$m, the initial velocity, an unknown quantity, we have:
+ So, solving for $v_{0}$, the initial velocity, an unknown quantity, we have:

$v_{0}=\frac{Δx}{t_{f}cos(θ)}$

@@ -188,7 +190,7 @@ values of 0 and 10

```{julia chap_6_plot_6}
plot(Uniform(0,10),xlim=(-1,11), ylim=(0,0.2), legend=false, fill=(0, .5, :lightblue));
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Uniform prior distribution for g")
```

@@ -209,21 +211,17 @@ end

```{julia,results = FALSE}
iterations = 10000
-ϵ = 0.05
-τ = 10
-```
-
-```{julia,results = FALSE}
θ = 45
-chain_uniform = sample(gravity_uniform(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false);
+chain_uniform = sample(gravity_uniform(Δt, Δx, θ), NUTS(), iterations, progress=false);
```

-Plotting the posterior distribution for p we that the values are mostly between 2 and 5, with the maximun near 3,8. Can we narrow the values we obtain?
+Plotting the posterior distribution for $g$, we see that the values are mostly between 2 and 5, with the maximum near 3.8. Can we narrow
+the values we obtain?

```{julia chap_6_plot_7}
histogram(chain_uniform[:g], xlim=[1,6], legend=false, normalized=true);
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Posterior distribution for g with Uniform prior")
```

@@ -233,7 +231,7 @@ variance of 2, and let the model update its beliefs with the points we have.

```{julia chap_6_plot_8}
plot(Normal(5,2), legend=false, fill=(0, .5,:lightblue));
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Normal prior distribution for g")
```
We then define the model with a Gaussian distribution as a prior for $g$:

@@ -254,26 +252,27 @@ end

Now we sample values from the posterior distribution and plot a histogram with the values obtained:

```{julia,results = FALSE}
-chain_normal = sample(gravity_normal(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false)
+chain_normal = sample(gravity_normal(Δt, Δx, θ), NUTS(), iterations, progress=false)
```

```{julia chap_6_plot_9}
histogram(chain_normal[:g], xlim=[3,4.5], legend=false, normalized=true, title="Posterior distribution for g with Normal prior");
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Posterior distribution for g with Normal prior")
```

We see that the plausible values for gravity have a clear center at 3.7, and the distribution is now narrower; that's good, but we can do better.
+If we observe the prior distribution proposed for $g$, we see that some values are negative, which makes no sense, because if that were the
+case, when you throw the stone it would go up and up, escaping from the planet.

-If we observe the prior distribution proposed for $g$, we see that some values are negative, which makes no sense because if that would the case when you trow the stone, it would go up and up, escaping from the planet.

-We propose then a new model for not allowing the negative values to happen. The distribution we are interested in is a LogNormal distribution. In the plot below is the prior distribution for g, a LogNormal distribution with mean 1.5 and variance of 0.5.
+We then propose a new model that does not allow negative values. The distribution we are interested in is a LogNormal distribution.
+The prior we will use for $g$ is a LogNormal distribution with parameters μ=1 and σ=0.5, plotted right after the short chain summary below.
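Before switching priors, it helps to back the eyeballed histogram readings with numbers. A minimal sketch, assuming the `chain_uniform` and `chain_normal` objects sampled above, and assuming `chain[:g]` behaves as the array of samples used in the histograms:

```julia
# Numeric summaries of the posterior samples from the two chains above.
using Statistics

g_uniform = vec(chain_uniform[:g])   # flatten samples across chains
g_normal  = vec(chain_normal[:g])

mean(g_uniform), std(g_uniform)             # center and spread, Uniform prior
mean(g_normal), std(g_normal)               # center and spread, Normal prior
quantile(g_normal, [0.025, 0.5, 0.975])     # median and 95% credible interval
```

If the chains behave as the histograms suggest, both means should land near 3.7, with the Normal-prior chain showing the smaller standard deviation.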
```{julia chap_6_plot_10}
plot(LogNormal(1,0.5), xlim=(0,10), legend=false, fill=(0, .5,:lightblue));
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Prior LogNormal distribution for g")
```

@@ -293,26 +292,28 @@ end
```

```{julia,results = FALSE}
-chain_lognormal = sample(gravity_lognormal(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false)
+chain_lognormal = sample(gravity_lognormal(Δt, Δx, θ), NUTS(), iterations, progress=false)
```

```{julia chap_6_plot_11}
histogram(chain_lognormal[:g], xlim=[3,4.5], legend=false, title="Posterior distribution for g with LogNormal prior", normalized=true);
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Posterior distribution for g with LogNormal prior")
```

## Optimizing the throwing angle

-Now that we have a good understanding of the equations and the overall problem, we are going to add some difficulties and we will loosen a constrain we have imposed: Suppose that the device employed to measure the angle has an error of 15°, no matter the angle.
+Let us remove the angle constraint and add complexity to the problem. We will consider that the device measuring the angle has a
+15° error, no matter the angle.

-*We want to know what are the most convenient angle to do the experiment and to measure or if it doesn't matter.*
+Is there an angle that is most convenient for the experiment, or doesn't it matter?

To do the analysis we need to see how the angle influences the computation of $g$, so, solving the equation for $g$, we have:

$g = \frac{2*\tan(\theta)*\Delta x}{t^{2}_f}$

-We can plot then the tangent of θ, with and error of 15° and see what is its maximum and minimun value:
+We can then plot the tangent of θ with a 15° error and see what its maximum and minimum values are:

```{julia, results = FALSE}
angles = 0:0.1:70
@@ -343,8 +344,10 @@ title!("Percentual error");
vline!([angles[findfirst(x->x==minimum(perc_error), perc_error)]], lw=3, label="Minimum error")

-So, now we see that the lowest percentual error is obtained when we work in angles near 45°, so we are good to go and we can use the data we measured adding the error in the angle.
-We now define the new model, where we include an uncertainty in the angle. We propose an uniform prior for the angle centered at 45°, the angle we think the measurement was done.
+So now we see that the lowest percentage error is obtained when we work at angles near 45°, so we are good to go: we can use the data we
+measured, adding the error in the angle.
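Since $g \propto \tan(\theta)$, what drives the error in $g$ is the spread of $\tan$ over $[\theta-15°, \theta+15°]$ relative to $\tan(\theta)$ itself. The middle of the error-curve code is elided from this diff, so the following is only a plausible reconstruction under that simple metric, not necessarily the book's exact computation:

```julia
# Assumed reconstruction of the percentage error in g caused by a ±15° error in θ.
angles = 15:0.1:70   # keep θ - 15° > 0 so the tangent stays positive
perc_error = [(tand(θ + 15) - tand(θ - 15)) / tand(θ) * 100 for θ in angles]

angles[argmin(perc_error)]   # where the relative spread of tan is smallest
```

Under this metric the minimum falls in the low 40s, consistent with the chapter's conclusion that angles near 45° are best.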
+We now define the new model, where we include uncertainty in the angle. We propose a uniform prior for the angle, centered at 45°,
+the angle at which we think the measurement was done.

```{julia, results = FALSE}
@model gravity_angle_uniform(t_final, x_final, θ) = begin
@@ -362,13 +365,13 @@ end
```

```{julia, results = FALSE}
-chain_uniform_angle = sample(gravity_angle_uniform(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false)
+chain_uniform_angle = sample(gravity_angle_uniform(Δt, Δx, θ), NUTS(), iterations, progress=false)
```

```{julia chap_6_plot_14}
histogram(chain_uniform_angle[:g], legend=false, normalized=true);
xlabel!("g_mars");
-ylabel!("Probability");
+ylabel!("Probability density");
title!("Posterior distribution for g, including uncertainty in the angle")
```

@@ -403,18 +406,20 @@ v = 11 .* sqrt.(chain_uniform_angle[:g] ./ (9.8*2))
histogram(v, legend=false, normalized=true);
title!("Escape velocity from Mars");
xlabel!("Escape Velocity of Mars [km/s]");
-ylabel!("Probability")
+ylabel!("Probability density")
```

Finally, we obtained the escape velocity from Mars.

## Summary
In this chapter we had to find the escape velocity from Mars.
-To solve this problem, we first needed to find the gravity of Mars, so we started with a physical description of the problem and concluded that by measuring the distance and time of a rock throw plus some Bayesian analysis we could infer the gravity of Mars.
+To solve this problem, we first needed to find the gravity of Mars, so we started with a physical description of the problem and concluded
+that, by measuring the distance and time of a rock throw plus some Bayesian analysis, we could infer the gravity of Mars.
Then we created a simple probabilistic model, with the prior probability set to a uniform distribution and the likelihood to a normal distribution.
We sampled the model and obtained our first posterior probability.
-We repeated this process two more times, changing the prior distribution of the model for more accurate ones, first with a normal distribution and then with a logarithmic one.
+We repeated this process two more times, changing the prior distribution of the model for more accurate ones, first with a Normal distribution and
+then with a LogNormal one.
Finally, we used the gravity we inferred to calculate the escape velocity from Mars.
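One step above deserves a spelled-out derivation: from $v_{escape}=\sqrt{2\,g\,r}$, the ratio between Mars and Earth gives $v_{mars} = v_{earth}\sqrt{\frac{g_{mars}\,r_{mars}}{g_{earth}\,r_{earth}}}$, and with $r_{mars} = r_{earth}/2$, $v_{earth} = 11\frac{km}{s}$ and $g_{earth} = 9.8\frac{m}{s^2}$ this collapses to $v_{mars} = 11\sqrt{\frac{g_{mars}}{2 \cdot 9.8}}$, which is exactly the `v = 11 .* sqrt.(chain_uniform_angle[:g] ./ (9.8*2))` line. A minimal sketch, assuming the `chain_uniform_angle` object from the last model, to summarize that posterior numerically instead of only plotting it:

```julia
# Numeric summary of the escape-velocity posterior (same transformation as above).
using Statistics

v = 11 .* sqrt.(vec(chain_uniform_angle[:g]) ./ (9.8 * 2))
mean(v), quantile(v, [0.025, 0.975])   # center and 95% credible interval, in km/s
```

With the posterior for $g$ centered near 3.7, this lands around 4.8 km/s.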
diff --git a/06_gravity/Manifest.toml b/06_gravity/Manifest.toml index 25e25813..a838b539 100644 --- a/06_gravity/Manifest.toml +++ b/06_gravity/Manifest.toml @@ -2,50 +2,50 @@ [[AbstractFFTs]] deps = ["ChainRulesCore", "LinearAlgebra"] -git-tree-sha1 = "69f7020bd72f069c219b5e8c236c1fa90d2cb409" +git-tree-sha1 = "16b6dbc4cf7caee4e1e75c49485ec67b667098a0" uuid = "621f4979-c628-5d54-868e-fcf4e3e8185c" -version = "1.2.1" +version = "1.3.1" [[AbstractMCMC]] -deps = ["BangBang", "ConsoleProgressMonitor", "Distributed", "Logging", "LoggingExtras", "ProgressLogging", "Random", "StatsBase", "TerminalLoggers", "Transducers"] -git-tree-sha1 = "5c26c7759412ffcaf0dd6e3172e55d783dd7610b" +deps = ["BangBang", "ConsoleProgressMonitor", "Distributed", "LogDensityProblems", "Logging", "LoggingExtras", "ProgressLogging", "Random", "StatsBase", "TerminalLoggers", "Transducers"] +git-tree-sha1 = "323799cab36200a01f5e9da3fecbd58329e2dd27" uuid = "80f14c24-f653-4e6a-9b94-39d6b0f70001" -version = "4.1.3" +version = "4.4.0" [[AbstractPPL]] -deps = ["AbstractMCMC", "DensityInterface", "Setfield", "SparseArrays"] -git-tree-sha1 = "6320752437e9fbf49639a410017d862ad64415a5" +deps = ["AbstractMCMC", "DensityInterface", "Random", "Setfield", "SparseArrays"] +git-tree-sha1 = "33ea6c6837332395dbf3ba336f273c9f7fcf4db9" uuid = "7a57a42e-76ec-4ea3-a279-07e840d6d9cf" -version = "0.5.2" +version = "0.5.4" [[AbstractTrees]] -git-tree-sha1 = "52b3b436f8f73133d7bc3a6c71ee7ed6ab2ab754" +git-tree-sha1 = "faa260e4cb5aba097a73fab382dd4b5819d8ec8c" uuid = "1520ce14-60c1-5f80-bbc7-55ef81b5835c" -version = "0.4.3" +version = "0.4.4" [[Adapt]] -deps = ["LinearAlgebra"] -git-tree-sha1 = "195c5505521008abea5aee4f96930717958eac6f" +deps = ["LinearAlgebra", "Requires"] +git-tree-sha1 = "cc37d689f599e8df4f464b2fa3870ff7db7492ef" uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e" -version = "3.4.0" +version = "3.6.1" [[AdvancedHMC]] -deps = ["AbstractMCMC", "ArgCheck", "DocStringExtensions", "InplaceOps", "LinearAlgebra", "ProgressMeter", "Random", "Requires", "Setfield", "Statistics", "StatsBase", "StatsFuns", "UnPack"] -git-tree-sha1 = "0091e2e4d0a7125da0e3ad8c7dbff9171a921461" +deps = ["AbstractMCMC", "ArgCheck", "DocStringExtensions", "InplaceOps", "LinearAlgebra", "LogDensityProblems", "LogDensityProblemsAD", "ProgressMeter", "Random", "Requires", "Setfield", "Statistics", "StatsBase", "StatsFuns", "UnPack"] +git-tree-sha1 = "c4a73dfdcce33e17c27b8063eae825fc15631cf8" uuid = "0bf59076-c3b1-5ca4-86bd-e02cd72cde3d" -version = "0.3.6" +version = "0.4.3" [[AdvancedMH]] -deps = ["AbstractMCMC", "Distributions", "Random", "Requires"] -git-tree-sha1 = "d7a7dabeaef34e5106cdf6c2ac956e9e3f97f666" +deps = ["AbstractMCMC", "Distributions", "LogDensityProblems", "Random", "Requires"] +git-tree-sha1 = "dffd459dbda082ef129c4e897dde2373c64771d2" uuid = "5b7e9947-ddc0-4b3f-9b55-0d8042f74170" -version = "0.6.8" +version = "0.7.2" [[AdvancedPS]] -deps = ["AbstractMCMC", "Distributions", "Libtask", "Random", "StatsFuns"] -git-tree-sha1 = "9ff1247be1e2aa2e740e84e8c18652bd9d55df22" +deps = ["AbstractMCMC", "Distributions", "Libtask", "Random", "Random123", "StatsFuns"] +git-tree-sha1 = "4d73400b3583147b1b639794696c78202a226584" uuid = "576499cb-2369-40b2-a588-c64705576edc" -version = "0.3.8" +version = "0.4.3" [[AdvancedVI]] deps = ["Bijectors", "Distributions", "DistributionsAD", "DocStringExtensions", "ForwardDiff", "LinearAlgebra", "ProgressMeter", "Random", "Requires", "StatsBase", "StatsFuns", "Tracker"] @@ -70,9 +70,9 @@ version = "0.2.0" [[Arpack]] deps = 
["Arpack_jll", "Libdl", "LinearAlgebra", "Logging"] -git-tree-sha1 = "91ca22c4b8437da89b030f08d71db55a379ce958" +git-tree-sha1 = "9b9b347613394885fd1c8c7729bfc60528faa436" uuid = "7d9fca2a-8960-54d3-9f78-7d1dccf2cb97" -version = "0.5.3" +version = "0.5.4" [[Arpack_jll]] deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS_jll", "Pkg"] @@ -80,17 +80,11 @@ git-tree-sha1 = "5ba6c757e8feccf03a1554dfaf3e26b3cfc7fd5e" uuid = "68821587-b530-5797-8361-c406ea357684" version = "3.5.1+1" -[[ArrayInterfaceCore]] -deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse"] -git-tree-sha1 = "e6cba4aadba7e8a7574ab2ba2fcfb307b4c4b02a" -uuid = "30b0a656-2188-435a-8636-2ec0e6a096e2" -version = "0.1.23" - -[[ArrayInterfaceStaticArraysCore]] -deps = ["Adapt", "ArrayInterfaceCore", "LinearAlgebra", "StaticArraysCore"] -git-tree-sha1 = "93c8ba53d8d26e124a5a8d4ec914c3a16e6a0970" -uuid = "dd5226c6-a4d4-4bc7-8575-46859f9c95b9" -version = "0.1.3" +[[ArrayInterface]] +deps = ["Adapt", "LinearAlgebra", "Requires", "SnoopPrecompile", "SparseArrays", "SuiteSparse"] +git-tree-sha1 = "a89acc90c551067cd84119ff018619a1a76c6277" +uuid = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9" +version = "7.2.1" [[Artifacts]] uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33" @@ -123,14 +117,9 @@ version = "0.1.1" [[Bijectors]] deps = ["ArgCheck", "ChainRulesCore", "ChangesOfVariables", "Compat", "Distributions", "Functors", "InverseFunctions", "IrrationalConstants", "LinearAlgebra", "LogExpFunctions", "MappedArrays", "Random", "Reexport", "Requires", "Roots", "SparseArrays", "Statistics"] -git-tree-sha1 = "a3704b8e5170f9339dff4e6cb286ad49464d3646" +git-tree-sha1 = "4f8d8df1f690c44e46464266ec928aaa5aabb299" uuid = "76274a88-744f-5084-9051-94815aaf08c4" -version = "0.10.6" - -[[BitFlags]] -git-tree-sha1 = "84259bb6172806304b9101094a7cc4bc6f56dbc6" -uuid = "d1d4a3ce-64b1-5f1a-9ba4-7e7e69966f35" -version = "0.1.5" +version = "0.10.7" [[Bzip2_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -144,7 +133,7 @@ uuid = "fa961155-64e5-5f13-b03f-caf6b980ea82" version = "0.4.2" [[Cairo_jll]] -deps = ["Artifacts", "Bzip2_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"] +deps = ["Artifacts", "Bzip2_jll", "CompilerSupportLibraries_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "JLLWrappers", "LZO_jll", "Libdl", "Pixman_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libXrender_jll", "Zlib_jll", "libpng_jll"] git-tree-sha1 = "4b859a208b2397a7a623a03449e4636bdb17bcf2" uuid = "83423d85-b0ee-5818-9007-b63ccbeb887a" version = "1.16.1+1" @@ -163,21 +152,21 @@ version = "0.2.2" [[ChainRules]] deps = ["Adapt", "ChainRulesCore", "Compat", "Distributed", "GPUArraysCore", "IrrationalConstants", "LinearAlgebra", "Random", "RealDot", "SparseArrays", "Statistics", "StructArrays"] -git-tree-sha1 = "d7d816527558cb8373e8f7a746d88eb8a167b023" +git-tree-sha1 = "7d20c2fb8ab838e41069398685e7b6b5f89ed85b" uuid = "082447d4-558c-5d27-93f4-14fc19e9eca2" -version = "1.44.7" +version = "1.48.0" [[ChainRulesCore]] deps = ["Compat", "LinearAlgebra", "SparseArrays"] -git-tree-sha1 = "e7ff6cadf743c098e08fca25c91103ee4303c9bb" +git-tree-sha1 = "c6d890a52d2c4d55d326439580c3b8d0875a77d9" uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4" -version = "1.15.6" +version = "1.15.7" [[ChangesOfVariables]] deps = ["ChainRulesCore", "LinearAlgebra", "Test"] -git-tree-sha1 = "38f7a08f19d8810338d4f5085211c7dfa5d5bdd8" +git-tree-sha1 = 
"485193efd2176b88e6622a39a246f8c5b600e74e" uuid = "9e997f8a-9a97-42d5-a9f1-ce6bfc15e2c0" -version = "0.1.4" +version = "0.1.6" [[Clustering]] deps = ["Distances", "LinearAlgebra", "NearestNeighbors", "Printf", "Random", "SparseArrays", "Statistics", "StatsBase"] @@ -185,17 +174,11 @@ git-tree-sha1 = "64df3da1d2a26f4de23871cd1b6482bb68092bd5" uuid = "aaaa29a8-35af-508c-8bc3-b662a17a0fe5" version = "0.14.3" -[[CodecZlib]] -deps = ["TranscodingStreams", "Zlib_jll"] -git-tree-sha1 = "ded953804d019afa9a3f98981d99b33e3db7b6da" -uuid = "944b1d66-785c-5afd-91f1-9de20f533193" -version = "0.7.0" - [[ColorSchemes]] -deps = ["ColorTypes", "ColorVectorSpace", "Colors", "FixedPointNumbers", "Random"] -git-tree-sha1 = "1fd869cc3875b57347f7027521f561cf46d1fcd8" +deps = ["ColorTypes", "ColorVectorSpace", "Colors", "FixedPointNumbers", "Random", "SnoopPrecompile"] +git-tree-sha1 = "aa3edc8f8dea6cbfa176ee12f7c2fc82f0608ed3" uuid = "35d6a980-a343-548e-a6ea-1d62b119f2f4" -version = "3.19.0" +version = "3.20.0" [[ColorTypes]] deps = ["FixedPointNumbers", "Random"] @@ -205,15 +188,15 @@ version = "0.11.4" [[ColorVectorSpace]] deps = ["ColorTypes", "FixedPointNumbers", "LinearAlgebra", "SpecialFunctions", "Statistics", "TensorCore"] -git-tree-sha1 = "d08c20eef1f2cbc6e60fd3612ac4340b89fea322" +git-tree-sha1 = "600cc5508d66b78aae350f7accdb58763ac18589" uuid = "c3611d14-8923-5661-9e6a-0046d554d3a4" -version = "0.9.9" +version = "0.9.10" [[Colors]] deps = ["ColorTypes", "FixedPointNumbers", "Reexport"] -git-tree-sha1 = "417b0ed7b8b838aa6ca0a87aadf1bb9eb111ce40" +git-tree-sha1 = "fc08e5930ee9a4e03f84bfb5211cb54e7769758a" uuid = "5ae59095-9a9b-59fe-a467-6f913c188581" -version = "0.12.8" +version = "0.12.10" [[Combinatorics]] git-tree-sha1 = "08c8b6831dc00bfea825826be0bc8336fc369860" @@ -233,9 +216,9 @@ version = "0.3.0" [[Compat]] deps = ["Dates", "LinearAlgebra", "UUIDs"] -git-tree-sha1 = "3ca828fe1b75fa84b021a7860bd039eaea84d2f2" +git-tree-sha1 = "7a60c856b9fa189eb34f5f8a6f6b5529b7942957" uuid = "34da2185-b29b-5c13-b0c7-acf172513d20" -version = "4.3.0" +version = "4.6.1" [[CompilerSupportLibraries_jll]] deps = ["Artifacts", "Libdl"] @@ -260,9 +243,9 @@ version = "0.1.2" [[ConstructionBase]] deps = ["LinearAlgebra"] -git-tree-sha1 = "fb21ddd70a051d882a1686a5a550990bbe371a95" +git-tree-sha1 = "89a9db8d28102b094992472d333674bd1a83ce2a" uuid = "187b0558-2788-49d3-abe0-74a17ed4e7c9" -version = "1.4.1" +version = "1.5.1" [[Contour]] git-tree-sha1 = "d05d9e7b7aedff4e5b51a029dced05cfb6125781" @@ -286,9 +269,9 @@ uuid = "dc8bdbbb-1ca9-579f-8c36-e416f6a65cce" version = "1.0.2" [[DataAPI]] -git-tree-sha1 = "46d2680e618f8abd007bce0c3026cb0c4a8f2032" +git-tree-sha1 = "e8119c1a33d267e16108be441a287a6981ba1630" uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a" -version = "1.12.0" +version = "1.14.0" [[DataStructures]] deps = ["Compat", "InteractiveUtils", "OrderedCollections"] @@ -334,15 +317,15 @@ version = "1.1.0" [[DiffRules]] deps = ["IrrationalConstants", "LogExpFunctions", "NaNMath", "Random", "SpecialFunctions"] -git-tree-sha1 = "8b7a4d23e22f5d44883671da70865ca98f2ebf9d" +git-tree-sha1 = "a4ad7ef19d2cdc2eff57abbbe68032b1cd0bd8f8" uuid = "b552c78f-8df3-52c6-915a-8e097449b14b" -version = "1.12.0" +version = "1.13.0" [[Distances]] deps = ["LinearAlgebra", "SparseArrays", "Statistics", "StatsAPI"] -git-tree-sha1 = "3258d0659f812acde79e8a74b11f17ac06d0ca04" +git-tree-sha1 = "49eba9ad9f7ead780bfb7ee319f962c811c6d3b2" uuid = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7" -version = "0.10.7" +version = "0.10.8" [[Distributed]] deps = ["Random", 
"Serialization", "Sockets"] @@ -350,9 +333,9 @@ uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b" [[Distributions]] deps = ["ChainRulesCore", "DensityInterface", "FillArrays", "LinearAlgebra", "PDMats", "Printf", "QuadGK", "Random", "SparseArrays", "SpecialFunctions", "Statistics", "StatsBase", "StatsFuns", "Test"] -git-tree-sha1 = "04db820ebcfc1e053bd8cbb8d8bccf0ff3ead3f7" +git-tree-sha1 = "da9e1a9058f8d3eec3a8c9fe4faacfb89180066b" uuid = "31c24e10-a181-5473-b8eb-7969acd0382f" -version = "0.25.76" +version = "0.25.86" [[DistributionsAD]] deps = ["Adapt", "ChainRules", "ChainRulesCore", "Compat", "DiffRules", "Distributions", "FillArrays", "LinearAlgebra", "NaNMath", "PDMats", "Random", "Requires", "SpecialFunctions", "StaticArrays", "StatsBase", "StatsFuns", "ZygoteRules"] @@ -362,9 +345,9 @@ version = "0.6.43" [[DocStringExtensions]] deps = ["LibGit2"] -git-tree-sha1 = "c36550cb29cbe373e95b3f40486b9a4148f89ffd" +git-tree-sha1 = "2fb1e02f2b635d0845df5d7c167fec4dd739b00d" uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae" -version = "0.9.2" +version = "0.9.3" [[Downloads]] deps = ["ArgTools", "FileWatching", "LibCURL", "NetworkOptions"] @@ -378,21 +361,21 @@ uuid = "fa6b7ba4-c1ee-5f82-b5fc-ecf0adba8f74" version = "0.6.8" [[DynamicPPL]] -deps = ["AbstractMCMC", "AbstractPPL", "BangBang", "Bijectors", "ChainRulesCore", "ConstructionBase", "Distributions", "DocStringExtensions", "LinearAlgebra", "MacroTools", "OrderedCollections", "Random", "Setfield", "Test", "ZygoteRules"] -git-tree-sha1 = "7bc3920ba1e577ad3d7ebac75602ab42b557e28e" +deps = ["AbstractMCMC", "AbstractPPL", "BangBang", "Bijectors", "ChainRulesCore", "ConstructionBase", "Distributions", "DocStringExtensions", "LinearAlgebra", "LogDensityProblems", "MacroTools", "OrderedCollections", "Random", "Setfield", "Test", "ZygoteRules"] +git-tree-sha1 = "932f5f977b04db019cc72ebd1f4161a6e7bded14" uuid = "366bfd00-2699-11ea-058f-f148b4cae6d8" -version = "0.20.2" +version = "0.21.6" [[EllipticalSliceSampling]] -deps = ["AbstractMCMC", "ArrayInterfaceCore", "Distributions", "Random", "Statistics"] -git-tree-sha1 = "4cda4527e990c0cc201286e0a0bfbbce00abcfc2" +deps = ["AbstractMCMC", "ArrayInterface", "Distributions", "Random", "Statistics"] +git-tree-sha1 = "973b4927d112559dc737f55d6bf06503a5b3fc14" uuid = "cad2338a-1db2-11e9-3401-43bc07c9ede2" -version = "1.0.0" +version = "1.1.0" [[EnumX]] -git-tree-sha1 = "e5333cd1e1c713ee21d07b6ed8b0d8853fabe650" +git-tree-sha1 = "bdb1942cd4c45e3c678fd11569d5cccd80976237" uuid = "4e289a0a-7415-4d19-859d-a7e5c4648b56" -version = "1.0.3" +version = "1.0.4" [[Expat_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -401,9 +384,9 @@ uuid = "2e619515-83b5-522b-bb60-26c02a35a201" version = "2.4.8+0" [[ExprTools]] -git-tree-sha1 = "56559bbef6ca5ea0c0818fa5c90320398a6fbf8d" +git-tree-sha1 = "c1d06d129da9f55715c6c212866f5b1bddc5fa00" uuid = "e2ba6199-217a-4e67-a87a-7c52f15ade04" -version = "0.1.8" +version = "0.1.9" [[FFMPEG]] deps = ["FFMPEG_jll"] @@ -425,9 +408,9 @@ version = "0.3.2" [[FFTW]] deps = ["AbstractFFTs", "FFTW_jll", "LinearAlgebra", "MKL_jll", "Preferences", "Reexport"] -git-tree-sha1 = "90630efff0894f8142308e334473eba54c433549" +git-tree-sha1 = "f9818144ce7c8c41edf5c4c179c684d92aa4d9fe" uuid = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341" -version = "1.5.0" +version = "1.6.0" [[FFTW_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -446,9 +429,9 @@ uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee" [[FillArrays]] deps = ["LinearAlgebra", "Random", "SparseArrays", "Statistics"] -git-tree-sha1 = 
"802bfc139833d2ba893dd9e62ba1767c88d708ae" +git-tree-sha1 = "d3ba08ab64bdfd27234d3f61956c966266757fe6" uuid = "1a297f60-69ca-5386-bcde-b61e274b549b" -version = "0.13.5" +version = "0.13.7" [[FixedPointNumbers]] deps = ["Statistics"] @@ -470,9 +453,9 @@ version = "0.4.2" [[ForwardDiff]] deps = ["CommonSubexpressions", "DiffResults", "DiffRules", "LinearAlgebra", "LogExpFunctions", "NaNMath", "Preferences", "Printf", "Random", "SpecialFunctions", "StaticArrays"] -git-tree-sha1 = "187198a4ed8ccd7b5d99c41b69c679269ea2b2d4" +git-tree-sha1 = "00e252f4d706b3d55a8863432e742bf5717b498d" uuid = "f6369f11-7733-5829-9624-2563aa707210" -version = "0.10.32" +version = "0.10.35" [[FreeType2_jll]] deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll"] @@ -493,9 +476,9 @@ version = "1.1.3" [[FunctionWrappersWrappers]] deps = ["FunctionWrappers"] -git-tree-sha1 = "a5e6e7f12607e90d71b09e6ce2c965e41b337968" +git-tree-sha1 = "b104d487b34566608f8b4e1c39fb0b10aa279ff8" uuid = "77dc65aa-8811-40c2-897b-53d922fa7daf" -version = "0.1.1" +version = "0.1.3" [[Functors]] deps = ["LinearAlgebra"] @@ -515,21 +498,21 @@ version = "3.3.8+0" [[GPUArraysCore]] deps = ["Adapt"] -git-tree-sha1 = "6872f5ec8fd1a38880f027a26739d42dcda6691f" +git-tree-sha1 = "1cd7f0af1aa58abc02ea1d872953a97359cb87fa" uuid = "46192b85-c4d5-4398-a991-12ede77f4527" -version = "0.1.2" +version = "0.1.4" [[GR]] -deps = ["Base64", "DelimitedFiles", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Preferences", "Printf", "Random", "Serialization", "Sockets", "Test", "UUIDs"] -git-tree-sha1 = "00a9d4abadc05b9476e937a5557fcce476b9e547" +deps = ["Artifacts", "Base64", "DelimitedFiles", "Downloads", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Preferences", "Printf", "Random", "Serialization", "Sockets", "TOML", "Tar", "Test", "UUIDs", "p7zip_jll"] +git-tree-sha1 = "660b2ea2ec2b010bb02823c6d0ff6afd9bdc5c16" uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71" -version = "0.69.5" +version = "0.71.7" [[GR_jll]] -deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Pkg", "Qt5Base_jll", "Zlib_jll", "libpng_jll"] -git-tree-sha1 = "bc9f7725571ddb4ab2c4bc74fa397c1c5ad08943" +deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Qt5Base_jll", "Zlib_jll", "libpng_jll"] +git-tree-sha1 = "d5e1fd17ac7f3aa4c5287a61ee28d4f8b8e98873" uuid = "d2c73de3-f751-5644-a686-071e5b155ba9" -version = "0.69.1+0" +version = "0.71.7+0" [[Gettext_jll]] deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"] @@ -539,15 +522,15 @@ version = "0.21.0+0" [[Ghostscript_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] -git-tree-sha1 = "78e2c69783c9753a91cdae88a8d432be85a2ab5e" +git-tree-sha1 = "43ba3d3c82c18d88471cfd2924931658838c9d8f" uuid = "61579ee1-b43e-5ca0-a5da-69d92c66a64b" -version = "9.55.0+0" +version = "9.55.0+4" [[Glib_jll]] deps = ["Artifacts", "Gettext_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Libiconv_jll", "Libmount_jll", "PCRE2_jll", "Pkg", "Zlib_jll"] -git-tree-sha1 = "fb83fbe02fe57f2c068013aa94bcdf6760d3a7a7" +git-tree-sha1 = "d3b3624125c1474292d0d8ed0f65554ac37ddb23" uuid = "7746bdde-850d-59dc-9ae8-88ece973131d" -version = "2.74.0+1" +version = "2.74.0+2" [[Graphics]] deps = ["Colors", "LinearAlgebra", "NaNMath"] @@ -563,9 +546,9 @@ version = "1.3.14+0" 
[[Graphs]] deps = ["ArnoldiMethod", "Compat", "DataStructures", "Distributed", "Inflate", "LinearAlgebra", "Random", "SharedArrays", "SimpleTraits", "SparseArrays", "Statistics"] -git-tree-sha1 = "ba2d094a88b6b287bd25cfa86f301e7693ffae2f" +git-tree-sha1 = "1cf1d7dcb4bc32d7b4a5add4232db3750c27ecb4" uuid = "86223c79-3864-5bf0-83f7-82e725a168b6" -version = "1.7.4" +version = "1.8.0" [[Grisu]] git-tree-sha1 = "53bb909d1151e57e2484c3d1b53e19552b887fb2" @@ -573,10 +556,10 @@ uuid = "42e2da0e-8278-4e71-bc24-59509adca0fe" version = "1.0.2" [[HTTP]] -deps = ["Base64", "CodecZlib", "Dates", "IniFile", "Logging", "LoggingExtras", "MbedTLS", "NetworkOptions", "OpenSSL", "Random", "SimpleBufferStream", "Sockets", "URIs", "UUIDs"] -git-tree-sha1 = "a97d47758e933cd5fe5ea181d178936a9fc60427" +deps = ["Base64", "Dates", "IniFile", "Logging", "MbedTLS", "NetworkOptions", "Sockets", "URIs"] +git-tree-sha1 = "0fa77022fe4b511826b39c894c90daf5fce3334a" uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3" -version = "1.5.1" +version = "0.9.17" [[HarfBuzz_jll]] deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"] @@ -621,10 +604,10 @@ uuid = "51556ac3-7006-55f5-8cb3-34580c88182d" version = "0.2.16" [[ImageFiltering]] -deps = ["CatIndices", "ComputationalResources", "DataStructures", "FFTViews", "FFTW", "ImageBase", "ImageCore", "LinearAlgebra", "OffsetArrays", "Reexport", "SparseArrays", "StaticArrays", "Statistics", "TiledIteration"] -git-tree-sha1 = "8b251ec0582187eff1ee5c0220501ef30a59d2f7" +deps = ["CatIndices", "ComputationalResources", "DataStructures", "FFTViews", "FFTW", "ImageBase", "ImageCore", "LinearAlgebra", "OffsetArrays", "Reexport", "SnoopPrecompile", "SparseArrays", "StaticArrays", "Statistics", "TiledIteration"] +git-tree-sha1 = "f265e53558fbbf23e0d54e4fab7106c0f2a9e576" uuid = "6a3955dd-da59-5b1f-98d4-e7296123deb5" -version = "0.7.2" +version = "0.7.3" [[ImageIO]] deps = ["FileIO", "IndirectArrays", "JpegTurbo", "LazyModules", "Netpbm", "OpenEXR", "PNGFiles", "QOI", "Sixel", "TiffImages", "UUIDs"] @@ -657,22 +640,22 @@ uuid = "787d08f9-d448-5407-9aad-5290dd7ab264" version = "0.3.2" [[ImageQualityIndexes]] -deps = ["ImageContrastAdjustment", "ImageCore", "ImageDistances", "ImageFiltering", "LazyModules", "OffsetArrays", "Statistics"] -git-tree-sha1 = "0c703732335a75e683aec7fdfc6d5d1ebd7c596f" +deps = ["ImageContrastAdjustment", "ImageCore", "ImageDistances", "ImageFiltering", "LazyModules", "OffsetArrays", "SnoopPrecompile", "Statistics"] +git-tree-sha1 = "5985d467623f106523ed8351f255642b5141e7be" uuid = "2996bd0c-7a13-11e9-2da2-2f5ce47296a9" -version = "0.3.3" +version = "0.3.4" [[ImageSegmentation]] deps = ["Clustering", "DataStructures", "Distances", "Graphs", "ImageCore", "ImageFiltering", "ImageMorphology", "LinearAlgebra", "MetaGraphs", "RegionTrees", "SimpleWeightedGraphs", "StaticArrays", "Statistics"] -git-tree-sha1 = "36832067ea220818d105d718527d6ed02385bf22" +git-tree-sha1 = "fb0b597b4928e29fed0597724cfbb5940974f8ca" uuid = "80713f31-8817-5129-9cf8-209ff8fb23e1" -version = "1.7.0" +version = "1.8.0" [[ImageShow]] -deps = ["Base64", "FileIO", "ImageBase", "ImageCore", "OffsetArrays", "StackViews"] -git-tree-sha1 = "b563cf9ae75a635592fc73d3eb78b86220e55bd8" +deps = ["Base64", "ColorSchemes", "FileIO", "ImageBase", "ImageCore", "OffsetArrays", "StackViews"] +git-tree-sha1 = "ce28c68c900eed3cdbfa418be66ed053e54d4f56" uuid = "4e3cecfd-b093-5904-9786-8bbb286a6a31" -version = "0.3.6" +version = "0.3.7" 
[[ImageTransformations]] deps = ["AxisAlgorithms", "ColorVectorSpace", "CoordinateTransformations", "ImageBase", "ImageCore", "Interpolations", "OffsetArrays", "Rotations", "StaticArrays"] @@ -736,15 +719,15 @@ uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240" [[Interpolations]] deps = ["Adapt", "AxisAlgorithms", "ChainRulesCore", "LinearAlgebra", "OffsetArrays", "Random", "Ratios", "Requires", "SharedArrays", "SparseArrays", "StaticArrays", "WoodburyMatrices"] -git-tree-sha1 = "842dd89a6cb75e02e85fdd75c760cdc43f5d6863" +git-tree-sha1 = "721ec2cf720536ad005cb38f50dbba7b02419a15" uuid = "a98d9a8b-a2ab-59e6-89dd-64a1c18fca59" -version = "0.14.6" +version = "0.14.7" [[IntervalSets]] deps = ["Dates", "Random", "Statistics"] -git-tree-sha1 = "3f91cd3f56ea48d4d2a75c2a65455c5fc74fa347" +git-tree-sha1 = "16c0cc91853084cb5f58a78bd209513900206ce6" uuid = "8197267c-284f-5f27-9208-e0e47529a953" -version = "0.7.3" +version = "0.7.4" [[InverseFunctions]] deps = ["Test"] @@ -753,9 +736,9 @@ uuid = "3587e190-3f89-42d0-90ee-14403ec27112" version = "0.1.8" [[InvertedIndices]] -git-tree-sha1 = "bee5f1ef5bf65df56bdd2e40447590b272a5471f" +git-tree-sha1 = "82aec7a3dd64f4d9584659dc0b62ef7db2ef3e19" uuid = "41ab1584-1d38-5bbf-9106-f11c6c58b48f" -version = "1.1.0" +version = "1.2.0" [[IrrationalConstants]] git-tree-sha1 = "7fd44fd4ff43fc60815f8e764c0f352b83c49151" @@ -773,10 +756,10 @@ uuid = "82899510-4779-5014-852e-03e436cf321d" version = "1.0.0" [[JLD2]] -deps = ["FileIO", "MacroTools", "Mmap", "OrderedCollections", "Pkg", "Printf", "Reexport", "TranscodingStreams", "UUIDs"] -git-tree-sha1 = "1c3ff7416cb727ebf4bab0491a56a296d7b8cf1d" +deps = ["FileIO", "MacroTools", "Mmap", "OrderedCollections", "Pkg", "Printf", "Reexport", "Requires", "TranscodingStreams", "UUIDs"] +git-tree-sha1 = "42c17b18ced77ff0be65957a591d34f4ed57c631" uuid = "033835bb-8acc-5ee8-8aae-3f567f8a3819" -version = "0.4.25" +version = "0.4.31" [[JLFzf]] deps = ["Pipe", "REPL", "Random", "fzf_jll"] @@ -798,15 +781,15 @@ version = "0.21.3" [[JpegTurbo]] deps = ["CEnum", "FileIO", "ImageCore", "JpegTurbo_jll", "TOML"] -git-tree-sha1 = "a77b273f1ddec645d1b7c4fd5fb98c8f90ad10a5" +git-tree-sha1 = "106b6aa272f294ba47e96bd3acbabdc0407b5c60" uuid = "b835a17e-a41a-41e7-81f0-2f016b05efe0" -version = "0.1.1" +version = "0.1.2" [[JpegTurbo_jll]] -deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] -git-tree-sha1 = "b53380851c6e6664204efb2e62cd24fa5c47e4ba" +deps = ["Artifacts", "JLLWrappers", "Libdl"] +git-tree-sha1 = "6f2675ef130a300a112286de91973805fcc5ffbc" uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8" -version = "2.1.2+0" +version = "2.1.91+0" [[KernelDensity]] deps = ["Distributions", "DocStringExtensions", "FFTW", "Interpolations", "StatsBase"] @@ -827,9 +810,9 @@ uuid = "88015f11-f218-50d7-93a8-a6af411a945d" version = "3.0.0+1" [[LRUCache]] -git-tree-sha1 = "d64a0aff6691612ab9fb0117b0995270871c5dfc" +git-tree-sha1 = "d862633ef6097461037a00a13f709a62ae4bdfdd" uuid = "8ac3fa9e-de4c-5943-b1dc-09c6b5f20637" -version = "1.3.0" +version = "1.4.0" [[LZO_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -844,9 +827,15 @@ version = "1.3.0" [[Latexify]] deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Printf", "Requires"] -git-tree-sha1 = "ab9aa169d2160129beb241cb2750ca499b4e90e9" +git-tree-sha1 = "2422f47b34d4b127720a18f86fa7b1aa2e141f29" uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316" -version = "0.15.17" +version = "0.15.18" + +[[Lazy]] +deps = ["MacroTools"] +git-tree-sha1 = 
"1370f8202dac30758f3c345f9909b97f53d87d3f" +uuid = "50d2b5c4-7a5e-59d5-8109-a42b560f39c0" +version = "0.15.1" [[LazyArtifacts]] deps = ["Artifacts", "Pkg"] @@ -899,9 +888,9 @@ version = "1.8.7+0" [[Libglvnd_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg", "Xorg_libX11_jll", "Xorg_libXext_jll"] -git-tree-sha1 = "7739f837d6447403596a75d19ed01fd08d6f56bf" +git-tree-sha1 = "6f73d1dd803986947b2c750138528a999a6c7733" uuid = "7e76a0d4-f3c7-5321-8279-8d96eeed0f29" -version = "1.3.0+3" +version = "1.6.0+0" [[Libgpg_error_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -911,9 +900,9 @@ version = "1.42.0+0" [[Libiconv_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] -git-tree-sha1 = "42b62845d70a619f063a7da093d995ec8e15e778" +git-tree-sha1 = "c7cb1f5d892775ba13767a87c7ada0b980ea0a71" uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531" -version = "1.16.1+1" +version = "1.16.1+2" [[Libmount_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -923,9 +912,9 @@ version = "2.35.0+0" [[Libtask]] deps = ["FunctionWrappers", "LRUCache", "LinearAlgebra", "Statistics"] -git-tree-sha1 = "dfa6c5f2d5a8918dd97c7f1a9ea0de68c2365426" +git-tree-sha1 = "3e893f18b4326ed392b699a4948b30885125d371" uuid = "6f1fad26-d15e-5dc8-ae53-837a1d7b8c9f" -version = "0.7.5" +version = "0.8.5" [[Libtiff_jll]] deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "LERC_jll", "Libdl", "Pkg", "Zlib_jll", "Zstd_jll"] @@ -944,37 +933,43 @@ deps = ["Libdl", "libblastrampoline_jll"] uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" [[LogDensityProblems]] -deps = ["ArgCheck", "DocStringExtensions", "Random", "Requires", "UnPack"] -git-tree-sha1 = "408a29d70f8032b50b22155e6d7776715144b761" +deps = ["ArgCheck", "DocStringExtensions", "Random"] +git-tree-sha1 = "f9a11237204bc137617194d79d813069838fcf61" uuid = "6fdf6af0-433a-55f7-b3ed-c6c6e0b8df7c" -version = "1.0.2" +version = "2.1.1" + +[[LogDensityProblemsAD]] +deps = ["DocStringExtensions", "LogDensityProblems", "Requires", "SimpleUnPack"] +git-tree-sha1 = "9be6d4c96c6367535ed3fecb61af72cac06f023f" +uuid = "996a588d-648d-4e1f-a8f0-a84b347e47b1" +version = "1.4.0" [[LogExpFunctions]] deps = ["ChainRulesCore", "ChangesOfVariables", "DocStringExtensions", "InverseFunctions", "IrrationalConstants", "LinearAlgebra"] -git-tree-sha1 = "94d9c52ca447e23eac0c0f074effbcd38830deb5" +git-tree-sha1 = "0a1b7c2863e44523180fdb3146534e265a91870b" uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688" -version = "0.3.18" +version = "0.3.23" [[Logging]] uuid = "56ddb016-857b-54e1-b83d-db4d58db5568" [[LoggingExtras]] deps = ["Dates", "Logging"] -git-tree-sha1 = "5d4d2d9904227b8bd66386c1138cf4d5ffa826bf" +git-tree-sha1 = "cedb76b37bc5a6c702ade66be44f831fa23c681e" uuid = "e6f89c97-d47a-5376-807f-9c37f3926c36" -version = "0.4.9" +version = "1.0.0" [[MCMCChains]] -deps = ["AbstractMCMC", "AxisArrays", "Compat", "Dates", "Distributions", "Formatting", "IteratorInterfaceExtensions", "KernelDensity", "LinearAlgebra", "MCMCDiagnosticTools", "MLJModelInterface", "NaturalSort", "OrderedCollections", "PrettyTables", "Random", "RecipesBase", "Serialization", "Statistics", "StatsBase", "StatsFuns", "TableTraits", "Tables"] -git-tree-sha1 = "f5f347b828fd95ece7398f412c81569789361697" +deps = ["AbstractMCMC", "AxisArrays", "Dates", "Distributions", "Formatting", "IteratorInterfaceExtensions", "KernelDensity", "LinearAlgebra", "MCMCDiagnosticTools", "MLJModelInterface", "NaturalSort", "OrderedCollections", "PrettyTables", "Random", "RecipesBase", "Statistics", "StatsBase", "StatsFuns", "TableTraits", "Tables"] 
+git-tree-sha1 = "3d70a6e7f57cd0ba1af5284f5c15d8f6331983a2" uuid = "c7f686f2-ff18-58e9-bc7b-31028e88f75d" -version = "5.5.0" +version = "6.0.0" [[MCMCDiagnosticTools]] -deps = ["AbstractFFTs", "DataAPI", "Distributions", "LinearAlgebra", "MLJModelInterface", "Random", "SpecialFunctions", "Statistics", "StatsBase", "Tables"] -git-tree-sha1 = "59ac3cc5c08023f58b9cd6a5c447c4407cede6bc" +deps = ["AbstractFFTs", "DataAPI", "DataStructures", "Distributions", "LinearAlgebra", "MLJModelInterface", "Random", "SpecialFunctions", "Statistics", "StatsBase", "StatsFuns", "Tables"] +git-tree-sha1 = "889c36e76dbde08c54f5a8bb5eb5049aab1ef519" uuid = "be115224-59cd-429b-ad48-344e309966f0" -version = "0.1.4" +version = "0.3.1" [[MKL_jll]] deps = ["Artifacts", "IntelOpenMP_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "Pkg"] @@ -984,9 +979,9 @@ version = "2022.2.0+0" [[MLJModelInterface]] deps = ["Random", "ScientificTypesBase", "StatisticalTraits"] -git-tree-sha1 = "0a36882e73833d60dac49b00d203f73acfd50b85" +git-tree-sha1 = "c8b7e632d6754a5e36c0d94a4b466a5ba3a30128" uuid = "e80e1ace-859a-464e-9ed9-23947d8ae3ea" -version = "1.7.0" +version = "1.8.0" [[MacroTools]] deps = ["Markdown", "Random"] @@ -1015,15 +1010,15 @@ uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1" version = "2.28.0+0" [[Measures]] -git-tree-sha1 = "e498ddeee6f9fdb4551ce855a46f54dbd900245f" +git-tree-sha1 = "c13304c81eec1ed3af7fc20e75fb6b26092a1102" uuid = "442fdcdd-2543-5da2-b0f3-8c86c306513e" -version = "0.3.1" +version = "0.3.2" [[MetaGraphs]] deps = ["Graphs", "JLD2", "Random"] -git-tree-sha1 = "2af69ff3c024d13bde52b34a2a7d6887d4e7b438" +git-tree-sha1 = "1130dbe1d5276cb656f6e1094ce97466ed700e5a" uuid = "626554b9-1ddb-594c-aa3c-2596fe9399a5" -version = "0.7.1" +version = "0.7.2" [[MicroCollections]] deps = ["BangBang", "InitialValues", "Setfield"] @@ -1033,18 +1028,18 @@ version = "0.1.3" [[Missings]] deps = ["DataAPI"] -git-tree-sha1 = "bf210ce90b6c9eed32d25dbcae1ebc565df2687f" +git-tree-sha1 = "f66bdc5de519e8f8ae43bdc598782d35a25b1272" uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28" -version = "1.0.2" +version = "1.1.0" [[Mmap]] uuid = "a63ad114-7e13-5084-954f-fe012c677804" [[MosaicViews]] deps = ["MappedArrays", "OffsetArrays", "PaddedViews", "StackViews"] -git-tree-sha1 = "b34e3bc3ca7c94914418637cb10cc4d1d80d877d" +git-tree-sha1 = "7b86a5d4d70a9f5cdf2dacb3cbe6d251d1a61dbe" uuid = "e94cdb99-869f-56ef-bcf0-1ae2bcbe0389" -version = "0.3.3" +version = "0.3.4" [[MozillaCACerts_jll]] uuid = "14a3606d-f60d-562e-9121-12d972cd8159" @@ -1052,21 +1047,21 @@ version = "2022.2.1" [[MultivariateStats]] deps = ["Arpack", "LinearAlgebra", "SparseArrays", "Statistics", "StatsAPI", "StatsBase"] -git-tree-sha1 = "efe9c8ecab7a6311d4b91568bd6c88897822fabe" +git-tree-sha1 = "91a48569383df24f0fd2baf789df2aade3d0ad80" uuid = "6f286f6a-111f-5878-ab1e-185364afe411" -version = "0.10.0" +version = "0.10.1" [[NNlib]] -deps = ["Adapt", "ChainRulesCore", "LinearAlgebra", "Pkg", "Requires", "Statistics"] -git-tree-sha1 = "00bcfcea7b2063807fdcab2e0ce86ef00b8b8000" +deps = ["Adapt", "ChainRulesCore", "LinearAlgebra", "Pkg", "Random", "Requires", "Statistics"] +git-tree-sha1 = "33ad5a19dc6730d592d8ce91c14354d758e53b0e" uuid = "872c559c-99b0-510c-b3b7-b6c96a88d5cd" -version = "0.8.10" +version = "0.8.19" [[NaNMath]] deps = ["OpenLibm_jll"] -git-tree-sha1 = "a7c3d1da1189a1c2fe843a3bfa04d18d20eb3211" +git-tree-sha1 = "0877504529a3e5c3343c6f8b4c0381e57e4387e4" uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3" -version = "1.0.1" +version = "1.0.2" [[NamedArrays]] deps = 
["Combinatorics", "DataStructures", "DelimitedFiles", "InvertedIndices", "LinearAlgebra", "Random", "Requires", "SparseArrays", "Statistics"] @@ -1081,30 +1076,30 @@ version = "1.0.0" [[NearestNeighbors]] deps = ["Distances", "StaticArrays"] -git-tree-sha1 = "440165bf08bc500b8fe4a7be2dc83271a00c0716" +git-tree-sha1 = "2c3726ceb3388917602169bed973dbc97f1b51a8" uuid = "b8a86587-4115-5ab1-83bc-aa920d37bbce" -version = "0.4.12" +version = "0.4.13" [[Netpbm]] -deps = ["FileIO", "ImageCore"] -git-tree-sha1 = "18efc06f6ec36a8b801b23f076e3c6ac7c3bf153" +deps = ["FileIO", "ImageCore", "ImageMetadata"] +git-tree-sha1 = "5ae7ca23e13855b3aba94550f26146c01d259267" uuid = "f09324ee-3d7c-5217-9330-fc30815ba969" -version = "1.0.2" +version = "1.1.0" [[NetworkOptions]] uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908" version = "1.2.0" [[Observables]] -git-tree-sha1 = "5a9ea4b9430d511980c01e9f7173739595bbd335" +git-tree-sha1 = "6862738f9796b3edc1c09d0890afce4eca9e7e93" uuid = "510215fc-4207-5dde-b226-833fc4488ee2" -version = "0.5.2" +version = "0.5.4" [[OffsetArrays]] deps = ["Adapt"] -git-tree-sha1 = "f71d8950b724e9ff6110fc948dff5a329f901d64" +git-tree-sha1 = "82d7c9e310fe55aa54996e6f7f94674e2a38fcb4" uuid = "6fe1bfb0-de20-5000-8ca7-80f57d26f881" -version = "1.12.8" +version = "1.12.9" [[Ogg_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -1134,17 +1129,11 @@ deps = ["Artifacts", "Libdl"] uuid = "05823500-19ac-5b8b-9628-191a04bc5112" version = "0.8.1+0" -[[OpenSSL]] -deps = ["BitFlags", "Dates", "MozillaCACerts_jll", "OpenSSL_jll", "Sockets"] -git-tree-sha1 = "3c3c4a401d267b04942545b1e964a20279587fd7" -uuid = "4d8831e6-92b7-49fb-bdf8-b643e874388c" -version = "1.3.0" - [[OpenSSL_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] -git-tree-sha1 = "e60321e3f2616584ff98f0a4f18d98ae6f89bbb3" +git-tree-sha1 = "9ff31d101d987eb9d66bd8b176ac7c277beccd09" uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95" -version = "1.1.17+0" +version = "1.1.20+0" [[OpenSpecFun_jll]] deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"] @@ -1154,9 +1143,9 @@ version = "0.5.5+0" [[Optimisers]] deps = ["ChainRulesCore", "Functors", "LinearAlgebra", "Random", "Statistics"] -git-tree-sha1 = "8a9102cb805df46fc3d6effdc2917f09b0215c0b" +git-tree-sha1 = "e5a1825d3d53aa4ad4fb42bd4927011ad4a78c3d" uuid = "3bd65402-5787-11e9-1adc-39752487f4e2" -version = "0.2.10" +version = "0.2.15" [[Opus_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -1176,9 +1165,9 @@ version = "10.40.0+0" [[PDMats]] deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse"] -git-tree-sha1 = "cf494dca75a69712a72b80bc48f59dcf3dea63ec" +git-tree-sha1 = "67eae2738d63117a196f497d7db789821bce61d1" uuid = "90014a1f-27ba-587c-ab20-58faa44d9150" -version = "0.11.16" +version = "0.11.17" [[PNGFiles]] deps = ["Base64", "CEnum", "ImageCore", "IndirectArrays", "OffsetArrays", "libpng_jll"] @@ -1199,10 +1188,10 @@ uuid = "d96e819e-fc66-5662-9728-84c9c7592b0a" version = "0.12.3" [[Parsers]] -deps = ["Dates"] -git-tree-sha1 = "6c01a9b494f6d2a9fc180a08b182fcb06f0958a0" +deps = ["Dates", "SnoopPrecompile"] +git-tree-sha1 = "478ac6c952fddd4399e71d4779797c538d0ff2bf" uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0" -version = "2.4.2" +version = "2.5.8" [[Pipe]] git-tree-sha1 = "6842804e7867b115ca9de748a0cf6b364523c16d" @@ -1234,15 +1223,15 @@ version = "3.1.0" [[PlotUtils]] deps = ["ColorSchemes", "Colors", "Dates", "Printf", "Random", "Reexport", "SnoopPrecompile", "Statistics"] -git-tree-sha1 = "21303256d239f6b484977314674aef4bb1fe4420" 
+git-tree-sha1 = "c95373e73290cf50a8a22c3375e4625ded5c5280" uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043" -version = "1.3.1" +version = "1.3.4" [[Plots]] -deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "JLFzf", "JSON", "LaTeXStrings", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "Pkg", "PlotThemes", "PlotUtils", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "RelocatableFolders", "Requires", "Scratch", "Showoff", "SnoopPrecompile", "SparseArrays", "Statistics", "StatsBase", "UUIDs", "UnicodeFun", "Unzip"] -git-tree-sha1 = "0a56829d264eb1bc910cf7c39ac008b5bcb5a0d9" +deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "JLFzf", "JSON", "LaTeXStrings", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "Pkg", "PlotThemes", "PlotUtils", "Preferences", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "RelocatableFolders", "Requires", "Scratch", "Showoff", "SnoopPrecompile", "SparseArrays", "Statistics", "StatsBase", "UUIDs", "UnicodeFun", "Unzip"] +git-tree-sha1 = "cfcd24ebf8b066b4f8e42bade600c8558212ed83" uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80" -version = "1.35.5" +version = "1.38.7" [[Preferences]] deps = ["TOML"] @@ -1251,10 +1240,10 @@ uuid = "21216c6a-2e73-6563-6e65-726566657250" version = "1.3.0" [[PrettyTables]] -deps = ["Crayons", "Formatting", "Markdown", "Reexport", "StringManipulation", "Tables"] -git-tree-sha1 = "460d9e154365e058c4d886f6f7d6df5ffa1ea80e" +deps = ["Crayons", "Formatting", "LaTeXStrings", "Markdown", "Reexport", "StringManipulation", "Tables"] +git-tree-sha1 = "96f6db03ab535bdb901300f88335257b0018689d" uuid = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d" -version = "2.1.2" +version = "2.2.2" [[Printf]] deps = ["Unicode"] @@ -1280,21 +1269,21 @@ version = "1.0.0" [[Qt5Base_jll]] deps = ["Artifacts", "CompilerSupportLibraries_jll", "Fontconfig_jll", "Glib_jll", "JLLWrappers", "Libdl", "Libglvnd_jll", "OpenSSL_jll", "Pkg", "Xorg_libXext_jll", "Xorg_libxcb_jll", "Xorg_xcb_util_image_jll", "Xorg_xcb_util_keysyms_jll", "Xorg_xcb_util_renderutil_jll", "Xorg_xcb_util_wm_jll", "Zlib_jll", "xkbcommon_jll"] -git-tree-sha1 = "c6c0f690d0cc7caddb74cef7aa847b824a16b256" +git-tree-sha1 = "0c03844e2231e12fda4d0086fd7cbe4098ee8dc5" uuid = "ea2cea3b-5b76-57ae-a6ef-0a8af62496e1" -version = "5.15.3+1" +version = "5.15.3+2" [[QuadGK]] deps = ["DataStructures", "LinearAlgebra"] -git-tree-sha1 = "97aa253e65b784fd13e83774cadc95b38011d734" +git-tree-sha1 = "786efa36b7eff813723c4849c90456609cf06661" uuid = "1fd47b50-473d-5c70-9696-f719f8f3bcdc" -version = "2.6.0" +version = "2.8.1" [[Quaternions]] -deps = ["LinearAlgebra", "Random"] -git-tree-sha1 = "fcebf40de9a04c58da5073ec09c1c1e95944c79b" +deps = ["LinearAlgebra", "Random", "RealDot"] +git-tree-sha1 = "da095158bdc8eaccb7890f9884048555ab771019" uuid = "94ee1d12-ae83-5a48-8b1c-48b8ff168ae0" -version = "0.6.1" +version = "0.7.4" [[REPL]] deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"] @@ -1304,6 +1293,18 @@ uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb" deps = ["SHA", "Serialization"] uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" +[[Random123]] +deps = ["Random", "RandomNumbers"] +git-tree-sha1 = "7a1a306b72cfa60634f03a911405f4e64d1b718b" +uuid = "74087812-796a-5b5d-8853-05524746bad3" +version = "1.6.0" + +[[RandomNumbers]] +deps = ["Random", "Requires"] +git-tree-sha1 = "043da614cc7e95c703498a491e2c21f58a2b8111" +uuid = "e6cf234a-135c-5ec9-84dd-332b85af5143" +version = "1.5.3" + [[RangeArrays]] 
git-tree-sha1 = "b9039e93773ddcfc828f12aadf7115b4b4d225f5" uuid = "b3c3ace0-ae52-54e7-9d0b-2c1406fd6b9d" @@ -1323,21 +1324,21 @@ version = "0.1.0" [[RecipesBase]] deps = ["SnoopPrecompile"] -git-tree-sha1 = "d12e612bba40d189cead6ff857ddb67bd2e6a387" +git-tree-sha1 = "261dddd3b862bd2c940cf6ca4d1c8fe593e457c8" uuid = "3cdcf5f2-1ef4-517c-9805-6587b60abb01" -version = "1.3.1" +version = "1.3.3" [[RecipesPipeline]] deps = ["Dates", "NaNMath", "PlotUtils", "RecipesBase", "SnoopPrecompile"] -git-tree-sha1 = "9b1c0c8e9188950e66fc28f40bfe0f8aac311fe0" +git-tree-sha1 = "e974477be88cb5e3040009f3767611bc6357846f" uuid = "01d81517-befc-4cb6-b9ec-a95719d0359c" -version = "0.6.7" +version = "0.6.11" [[RecursiveArrayTools]] -deps = ["Adapt", "ArrayInterfaceCore", "ArrayInterfaceStaticArraysCore", "ChainRulesCore", "DocStringExtensions", "FillArrays", "GPUArraysCore", "IteratorInterfaceExtensions", "LinearAlgebra", "RecipesBase", "StaticArraysCore", "Statistics", "Tables", "ZygoteRules"] -git-tree-sha1 = "fe25988dce8dd3b763cf39d0ca39b09db3571ff7" +deps = ["Adapt", "ArrayInterface", "ChainRulesCore", "DocStringExtensions", "FillArrays", "GPUArraysCore", "IteratorInterfaceExtensions", "LinearAlgebra", "RecipesBase", "Requires", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables", "ZygoteRules"] +git-tree-sha1 = "3dcb2a98436389c0aac964428a5fa099118944de" uuid = "731186ca-8d62-57ce-b412-fbd966d074cd" -version = "2.32.1" +version = "2.38.0" [[Reexport]] git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b" @@ -1364,27 +1365,27 @@ version = "1.3.0" [[Rmath]] deps = ["Random", "Rmath_jll"] -git-tree-sha1 = "bf3188feca147ce108c76ad82c2792c57abe7b1f" +git-tree-sha1 = "f65dcb5fa46aee0cf9ed6274ccbd597adc49aa7b" uuid = "79098fc4-a85e-5d69-aa6a-4863f24498fa" -version = "0.7.0" +version = "0.7.1" [[Rmath_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] -git-tree-sha1 = "68db32dff12bb6127bac73c209881191bf0efbb7" +git-tree-sha1 = "6ed52fdd3382cf21947b15e8870ac0ddbff736da" uuid = "f50d1b31-88e8-58de-be2c-1cc44531875f" -version = "0.3.0+0" +version = "0.4.0+0" [[Roots]] deps = ["ChainRulesCore", "CommonSolve", "Printf", "Setfield"] -git-tree-sha1 = "a3db467ce768343235032a1ca0830fc64158dadf" +git-tree-sha1 = "b45deea4566988994ebb8fb80aa438a295995a6e" uuid = "f2b01f46-fcfa-551c-844a-d8ac1e96c665" -version = "2.0.8" +version = "2.0.10" [[Rotations]] deps = ["LinearAlgebra", "Quaternions", "Random", "StaticArrays", "Statistics"] -git-tree-sha1 = "793b6ef92f9e96167ddbbd2d9685009e200eb84f" +git-tree-sha1 = "72a6abdcd088764878b473102df7c09bbc0548de" uuid = "6038ab10-8711-5258-84ad-4b1120ba62dc" -version = "1.3.3" +version = "1.4.0" [[RuntimeGeneratedFunctions]] deps = ["ExprTools", "SHA", "Serialization"] @@ -1397,10 +1398,16 @@ uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce" version = "0.7.0" [[SciMLBase]] -deps = ["ArrayInterfaceCore", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "Preferences", "RecipesBase", "RecursiveArrayTools", "RuntimeGeneratedFunctions", "StaticArraysCore", "Statistics", "Tables"] -git-tree-sha1 = "3a396522ce4a81758cac1481bd596c3059a8e69c" +deps = ["ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "Preferences", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", 
"SciMLOperators", "SnoopPrecompile", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables", "TruncatedStacktraces"] +git-tree-sha1 = "fdea92555855db1d86c3638f0a789d6e0a830e67" uuid = "0bca4576-84f4-4d90-8ffe-ffa030f20462" -version = "1.66.0" +version = "1.89.0" + +[[SciMLOperators]] +deps = ["ArrayInterface", "DocStringExtensions", "Lazy", "LinearAlgebra", "Setfield", "SparseArrays", "StaticArraysCore", "Tricks"] +git-tree-sha1 = "e61e48ef909375203092a6e83508c8416df55a83" +uuid = "c0aeaf25-5076-4817-a8d5-81caf7dfa961" +version = "0.2.0" [[ScientificTypesBase]] git-tree-sha1 = "a8e18eb383b5ecf1b5e6fc237eb39255044fd92b" @@ -1409,24 +1416,24 @@ version = "3.0.0" [[Scratch]] deps = ["Dates"] -git-tree-sha1 = "f94f779c94e58bf9ea243e77a37e16d9de9126bd" +git-tree-sha1 = "30449ee12237627992a99d5e30ae63e4d78cd24a" uuid = "6c6a2e73-6563-6170-7368-637461726353" -version = "1.1.1" +version = "1.2.0" [[SentinelArrays]] deps = ["Dates", "Random"] -git-tree-sha1 = "efd23b378ea5f2db53a55ae53d3133de4e080aa9" +git-tree-sha1 = "77d3c4726515dca71f6d80fbb5e251088defe305" uuid = "91c51154-3ec4-41a3-a24f-3f23e20d615c" -version = "1.3.16" +version = "1.3.18" [[Serialization]] uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b" [[Setfield]] -deps = ["ConstructionBase", "Future", "MacroTools", "Requires"] -git-tree-sha1 = "38d88503f695eb0301479bc9b0d4320b378bafe5" +deps = ["ConstructionBase", "Future", "MacroTools", "StaticArraysCore"] +git-tree-sha1 = "e2cc6d8c88613c05e1defb55170bf5ff211fbeac" uuid = "efcf1570-3423-57d1-acb7-fd33fddbac46" -version = "0.8.2" +version = "1.1.1" [[SharedArrays]] deps = ["Distributed", "Mmap", "Random", "Serialization"] @@ -1438,22 +1445,22 @@ git-tree-sha1 = "91eddf657aca81df9ae6ceb20b959ae5653ad1de" uuid = "992d4aef-0814-514b-bc4d-f2e9a6c4116f" version = "1.0.3" -[[SimpleBufferStream]] -git-tree-sha1 = "874e8867b33a00e784c8a7e4b60afe9e037b74e1" -uuid = "777ac1f9-54b0-4bf8-805c-2214025038e7" -version = "1.1.0" - [[SimpleTraits]] deps = ["InteractiveUtils", "MacroTools"] git-tree-sha1 = "5d7e3f4e11935503d3ecaf7186eac40602e7d231" uuid = "699a6c99-e7fa-54fc-8d76-47d257e15c1d" version = "0.9.4" +[[SimpleUnPack]] +git-tree-sha1 = "46dc21a1bf27b751453f7dea36786c006707f0d4" +uuid = "ce78b400-467f-4804-87d8-8f486da07d0a" +version = "1.0.1" + [[SimpleWeightedGraphs]] deps = ["Graphs", "LinearAlgebra", "Markdown", "SparseArrays", "Test"] -git-tree-sha1 = "a6f404cc44d3d3b28c793ec0eb59af709d827e4e" +git-tree-sha1 = "7d0b07df35fccf9b866a94bcab98822a87a3cb6f" uuid = "47aef6b3-ad0c-573a-a1e2-d07658019622" -version = "1.2.1" +version = "1.3.0" [[Sixel]] deps = ["Dates", "FileIO", "ImageCore", "IndirectArrays", "OffsetArrays", "REPL", "libsixel_jll"] @@ -1462,18 +1469,19 @@ uuid = "45858cf5-a6b0-47a3-bbea-62219f50df47" version = "0.1.2" [[SnoopPrecompile]] -git-tree-sha1 = "f604441450a3c0569830946e5b33b78c928e1a85" +deps = ["Preferences"] +git-tree-sha1 = "e760a70afdcd461cf01a575947738d359234665c" uuid = "66db9d55-30c0-4569-8b51-7e840670fc0c" -version = "1.0.1" +version = "1.0.3" [[Sockets]] uuid = "6462fe0b-24de-5631-8697-dd941f90decc" [[SortingAlgorithms]] deps = ["DataStructures"] -git-tree-sha1 = "b3363d7460f7d098ca0912c69b082f75625d7508" +git-tree-sha1 = "a4ada03f999bd01b3a25dcaa30b2d929fe537e00" uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c" -version = "1.0.1" +version = "1.1.0" [[SparseArrays]] deps = ["LinearAlgebra", "Random"] @@ -1481,9 +1489,9 @@ uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf" [[SpecialFunctions]] deps = ["ChainRulesCore", "IrrationalConstants", "LogExpFunctions", 
"OpenLibm_jll", "OpenSpecFun_jll"] -git-tree-sha1 = "d75bda01f8c31ebb72df80a46c88b25d1c79c56d" +git-tree-sha1 = "ef28127915f4229c971eb43f3fc075dd3fe91880" uuid = "276daf66-3868-5448-9aa4-cd146d93841b" -version = "2.1.7" +version = "2.2.0" [[SplittablesBase]] deps = ["Setfield", "Test"] @@ -1499,9 +1507,9 @@ version = "0.1.1" [[StaticArrays]] deps = ["LinearAlgebra", "Random", "StaticArraysCore", "Statistics"] -git-tree-sha1 = "f86b3a049e5d05227b10e15dbb315c5b90f14988" +git-tree-sha1 = "2d7d9e1ddadc8407ffd460e24218e37ef52dd9a3" uuid = "90137ffa-7385-5640-81b9-e52037218182" -version = "1.5.9" +version = "1.5.16" [[StaticArraysCore]] git-tree-sha1 = "6b7ba252635a5eff6a0b0664a41ee140a1c9e72a" @@ -1532,9 +1540,9 @@ version = "0.33.21" [[StatsFuns]] deps = ["ChainRulesCore", "HypergeometricFunctions", "InverseFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"] -git-tree-sha1 = "5783b877201a82fc0014cbf381e7e6eb130473a4" +git-tree-sha1 = "f625d686d5a88bcd2b15cd81f18f98186fdc0c9a" uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c" -version = "1.0.1" +version = "1.3.0" [[StatsPlots]] deps = ["AbstractFFTs", "Clustering", "DataStructures", "DataValues", "Distributions", "Interpolations", "KernelDensity", "LinearAlgebra", "MultivariateStats", "NaNMath", "Observables", "Plots", "RecipesBase", "RecipesPipeline", "Reexport", "StatsBase", "TableOperations", "Tables", "Widgets"] @@ -1548,15 +1556,21 @@ uuid = "892a3eda-7b42-436c-8928-eab12a02cf0e" version = "0.3.0" [[StructArrays]] -deps = ["Adapt", "DataAPI", "StaticArraysCore", "Tables"] -git-tree-sha1 = "13237798b407150a6d2e2bce5d793d7d9576e99e" +deps = ["Adapt", "DataAPI", "GPUArraysCore", "StaticArraysCore", "Tables"] +git-tree-sha1 = "b03a3b745aa49b566f128977a7dd1be8711c5e71" uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a" -version = "0.6.13" +version = "0.6.14" [[SuiteSparse]] deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"] uuid = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9" +[[SymbolicIndexingInterface]] +deps = ["DocStringExtensions"] +git-tree-sha1 = "f8ab052bfcbdb9b48fad2c80c873aa0d0344dfe5" +uuid = "2efcf032-c050-4f8e-a9bb-153293bab1f5" +version = "0.2.2" + [[TOML]] deps = ["Dates"] uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76" @@ -1603,9 +1617,9 @@ uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40" [[TiffImages]] deps = ["ColorTypes", "DataStructures", "DocStringExtensions", "FileIO", "FixedPointNumbers", "IndirectArrays", "Inflate", "Mmap", "OffsetArrays", "PkgVersion", "ProgressMeter", "UUIDs"] -git-tree-sha1 = "70e6d2da9210371c927176cb7a56d41ef1260db7" +git-tree-sha1 = "7e6b0e3e571be0b4dd4d2a9a3a83b65c04351ccc" uuid = "731e570b-9d59-4bfa-96dc-6df516fadf69" -version = "0.6.1" +version = "0.6.3" [[TiledIteration]] deps = ["OffsetArrays"] @@ -1615,32 +1629,43 @@ version = "0.3.1" [[Tracker]] deps = ["Adapt", "DiffRules", "ForwardDiff", "Functors", "LinearAlgebra", "LogExpFunctions", "MacroTools", "NNlib", "NaNMath", "Optimisers", "Printf", "Random", "Requires", "SpecialFunctions", "Statistics"] -git-tree-sha1 = "d963aad627fd7af56fbbfee67703c2f7bfee9dd7" +git-tree-sha1 = "77817887c4b414b9c6914c61273910e3234eb21c" uuid = "9f7883ad-71c0-57eb-9f7f-b5c9e6d3789c" -version = "0.2.22" +version = "0.2.23" [[TranscodingStreams]] deps = ["Random", "Test"] -git-tree-sha1 = "8a75929dcd3c38611db2f8d08546decb514fcadf" +git-tree-sha1 = "94f38103c984f89cf77c402f2a68dbd870f8165f" uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa" -version = "0.9.9" +version = "0.9.11" [[Transducers]] deps = ["Adapt", "ArgCheck", 
"BangBang", "Baselet", "CompositionsBase", "DefineSingletons", "Distributed", "InitialValues", "Logging", "Markdown", "MicroCollections", "Requires", "Setfield", "SplittablesBase", "Tables"] -git-tree-sha1 = "77fea79baa5b22aeda896a8d9c6445a74500a2c2" +git-tree-sha1 = "c42fa452a60f022e9e087823b47e5a5f8adc53d5" uuid = "28d57a85-8fef-5791-bfe6-a80928e7c999" -version = "0.4.74" +version = "0.4.75" + +[[Tricks]] +git-tree-sha1 = "6bac775f2d42a611cdfcd1fb217ee719630c4175" +uuid = "410a4b4d-49e4-4fbc-ab6d-cb71b17b3775" +version = "0.1.6" + +[[TruncatedStacktraces]] +deps = ["InteractiveUtils", "MacroTools"] +git-tree-sha1 = "f7057ba94e63b269125c0db75dcdef913d956351" +uuid = "781d530d-4396-4725-bb49-402e4bee1e77" +version = "1.1.0" [[Turing]] -deps = ["AbstractMCMC", "AdvancedHMC", "AdvancedMH", "AdvancedPS", "AdvancedVI", "BangBang", "Bijectors", "DataStructures", "Distributions", "DistributionsAD", "DocStringExtensions", "DynamicPPL", "EllipticalSliceSampling", "ForwardDiff", "Libtask", "LinearAlgebra", "LogDensityProblems", "MCMCChains", "NamedArrays", "Printf", "Random", "Reexport", "Requires", "SciMLBase", "SpecialFunctions", "Statistics", "StatsBase", "StatsFuns", "Tracker"] -git-tree-sha1 = "68fb67dab0c11de2bb1d761d7a742b965a9bc875" +deps = ["AbstractMCMC", "AdvancedHMC", "AdvancedMH", "AdvancedPS", "AdvancedVI", "BangBang", "Bijectors", "DataStructures", "Distributions", "DistributionsAD", "DocStringExtensions", "DynamicPPL", "EllipticalSliceSampling", "ForwardDiff", "Libtask", "LinearAlgebra", "LogDensityProblems", "LogDensityProblemsAD", "MCMCChains", "NamedArrays", "Printf", "Random", "Reexport", "Requires", "SciMLBase", "Setfield", "SpecialFunctions", "Statistics", "StatsBase", "StatsFuns", "Tracker"] +git-tree-sha1 = "c839c49b5907233e98997d561c809e619cbe58d0" uuid = "fce5fe82-541a-59a6-adf8-730c64b5f9a0" -version = "0.21.12" +version = "0.24.2" [[URIs]] -git-tree-sha1 = "e59ecc5a41b000fa94423a578d29290c7266fc10" +git-tree-sha1 = "074f993b0ca030848b897beff716d93aca60f06a" uuid = "5c2747f8-b7ea-4ff2-ba2e-563bfd36b1d4" -version = "1.4.0" +version = "1.4.2" [[UUIDs]] deps = ["Random", "SHA"] @@ -1667,9 +1692,9 @@ version = "0.2.0" [[Wayland_jll]] deps = ["Artifacts", "Expat_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg", "XML2_jll"] -git-tree-sha1 = "3e61f0b86f90dacb0bc0e73a0c5a83f6a8636e23" +git-tree-sha1 = "ed8d92d9774b077c53e1da50fd81a36af3744c1c" uuid = "a2964d1f-97da-50d4-b82a-358c7fce9d89" -version = "1.19.0+0" +version = "1.21.0+0" [[Wayland_protocols_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] @@ -1691,9 +1716,9 @@ version = "0.5.5" [[XML2_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "Zlib_jll"] -git-tree-sha1 = "58443b63fb7e465a8a7210828c91c08b92132dff" +git-tree-sha1 = "93c41695bc1c08c46c5899f4fe06d6ead504bb73" uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a" -version = "2.9.14+0" +version = "2.10.3+0" [[XSLT_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"] @@ -1833,10 +1858,10 @@ uuid = "83775a58-1f1d-513f-b197-d71354ab007a" version = "1.2.12+3" [[Zstd_jll]] -deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"] -git-tree-sha1 = "e45044cd873ded54b6a5bac0eb5c971392cf1927" +deps = ["Artifacts", "JLLWrappers", "Libdl"] +git-tree-sha1 = "c6edfe154ad7b313c01aceca188c05c835c67360" uuid = "3161d3a3-bdf6-5164-811a-617609db77b4" -version = "1.5.2+0" +version = "1.5.4+0" [[ZygoteRules]] deps = ["MacroTools"] diff --git a/docs/404.html b/docs/404.html index 
3850d081..8970d776 100644 --- a/docs/404.html +++ b/docs/404.html
[regenerated 404 page: head assets, scripts, and sidebar markup rebuilt; the visible text is unchanged: "Page not found. The page you requested cannot be found (perhaps it was moved or renamed). You may want to try searching to find the page's new location, or use the table of contents to find the page you are looking for."]
[binary changes: docs/data_science_in_julia_for_hackers_files/figure-html/chap_6_plot_{6,7,8,9,10,11,13,14,15}-J1.png regenerated]
diff --git a/docs/escaping-from-mars.html b/docs/escaping-from-mars.html index e213e177..3eb7098d 100644
@@ -6,7 +6,7 @@ Chapter 6 Escaping from Mars | Data Science in Julia for Hackers
[page chrome rebuilt: head assets updated and the sidebar table of contents regenerated; the visible entry "1 Science technology and epistemology" becomes "1 Science, technology and epistemology", and the chapter 6 subsection links are renumbered for the new "6.1 Proyectile Motion" section]
@@ -308,18 +316,26 @@

    Chapter 6 Escaping from Mars

    -

    To simplify, we approximate the escape velocity as:

    +

    Assuming that the planet is spherical and considering only the gravitational effect of Mars, the escape velocity can be described as

    \(v_{escape}=\sqrt{2*g_{planet}*r_{planet}}\)

    -

    where \(r\) is the radius of the planet and \(g\) the constant of gravity at the surface. Suppose that we remember from school that the escape velocity from Earth is \(11\frac{km}{s}\) and that the radius of Mars if half of the earth’s.

    -

    We remember that the gravity of Earth at its surface is \(9.8\frac{m}{s^2}\), so all we need to estimate the escape velocity of Mars is the gravity of the planet at its surface. So we decide to make an experiment and gather some data. But what exactly do you need to measure? Let’s see.

    -

    We are going to calculate the constant \(g_{mars}\) just throwing stones. We are going to explain a bit the equations regarding the experiment. The topic we need to revisit is Proyectile Motion.

    -

    ## Proyectile Motion

    +

where \(r\) is the radius of the planet and \(g\) the gravity constant at the surface of Mars. Suppose that we remember from school that the escape velocity from Earth is \(11\frac{km}{s}\) and that the radius of Mars is half of the Earth's.

    +

We remember that the gravity of Earth at its surface is \(9.8\frac{m}{s^2}\), so all we need to estimate the escape velocity of Mars is the gravity of the planet at its surface. So we decide to run an experiment and gather some data. But what exactly do we need to measure? Let's see.
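As a quick sanity check of the escape velocity formula (a minimal sketch; the Earth's radius is not part of our data, so we recover it from the two numbers we do remember):

v_earth = 11e3                        # escape velocity from Earth [m/s]
g_earth = 9.8                         # gravity at the Earth's surface [m/s^2]
R_earth = v_earth^2 / (2 * g_earth)   # ≈ 6.2e6 m, close to the true ~6.4e6 m

Recovering a radius of roughly 6200 km, close to the true value, tells us the formula and our remembered numbers are consistent with each other.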

    +

We are going to calculate the constant \(g_{mars}\) just by throwing stones. We are going to briefly explain the equations behind the experiment. The topic we need to revisit is Proyectile Motion.

    +
    +

    6.1 Proyectile Motion

Gravity pulls us down to Earth, or in our case, to Mars. This means that we have an acceleration, since there is a force. Recalling Newton's equation:

    \(\overrightarrow{F} = m * \overrightarrow{a}\)

    -

    where \(m\) is the mass of the object, \(\overrightarrow{F}\) is the force(it’s what make us fall) and \(\overrightarrow{a}\) is the acceleration, in our case is what we call gravity \(\overrightarrow{g}\). -The arrow \(\overrightarrow{}\) over the letter means that the quantity has a direction in space, in our case, gravity is pointing to the center of the Earth, or Mars.

    +

    where \(m\) is the mass of the object, \(\overrightarrow{F}\) is the force(it’s what make us fall) and \(\overrightarrow{a}\) is the acceleration, in +our case is what we call gravity \(\overrightarrow{g}\). +The arrow \(\overrightarrow{}\) over the letter means that the quantity has a direction in space, in our case, gravity is pointing to the center of +the Earth, or Mars.

    How can we derive the motion of the stones with that equation?

    -

    In the figure below we show a sketch of the problem: We have the 2 axis, \(x\) and \(y\), the \(x\) normally is parallel to the ground and the \(y\) axis is perpendicular, pointing to the sky. We also draw the initial velocity \(v_0\) of the proyectile, and the angle \(\theta\) with respect to the ground. Also it’s important to notice that the gravity points in the opposite direction of the \(y\) axis.

    +

In the figure below we show a sketch of the problem: we have two axes, \(x\) and \(y\); the \(x\) axis is parallel to the ground and the \(y\) axis is perpendicular to it, pointing to the sky. We also draw the initial velocity \(v_0\) of the proyectile and the angle \(\theta\) with respect to the ground. It's also important to notice that gravity points in the opposite direction of the \(y\) axis.

But what is the trajectory of the projectile? And how do the coordinates \(x\) and \(y\) evolve with time?

If we remember from school, the equations for \(x\) and \(y\) over time are:

@@ -327,57 +343,71 @@ Chapter 6 Escaping from Mars

\(x(t) = v_0*t*cos(θ)\)

\(y(t) = v_0*t*sin(θ) -\frac{g*t^2}{2}\)

where \(t\) is the time at which we want to know the coordinates.

    What do these equations tell us?

    -

    If we see the evolution of the projectile in the \(x\) axis only, it follows a straight line (until it hits the ground) and in the \(y\) axis the movement follows a parabola, but how do we interpret that?

    -

    We can imagine what happens if we trow a stone to the sky: the stone starts to go up and then, at some point, it reaches the highest position it can go. Then, the stone starts to go down.

    +

If we look at the evolution of the projectile along the \(x\) axis only, it follows a straight line (until it hits the ground), while along the \(y\) axis the movement follows a parabola. But how do we interpret that?

    +

We can imagine what happens if we throw a stone into the sky: the stone starts to go up and then, at some point, it reaches the highest position it can go. Then, the stone starts to go down.

    How does the velocity evolve in this trajectory?

    -

    Since the begining, the velocity starts decreasing until it has the value of 0 at the highest point, where the stone stops for a moment, then it changes its direction and start to increase again, pointing towards the ground. Ploting the evolution of the height of the stone, we obtain the plot shown below. We see that, at the begining the stone starts to go up fast and then it slows down. We see that for each value of \(y\) there are 2 values of \(t\) that satisfies the equation, thats because the stone pass twice for each point, except for the highest value of \(y\).

    +

From the beginning, the velocity decreases until it reaches 0 at the highest point, where the stone stops for a moment; then it changes direction and starts to increase again, pointing towards the ground. Plotting the evolution of the height of the stone, we obtain the plot shown below. We see that, at the beginning, the stone goes up fast and then slows down. We also see that for each value of \(y\) there are 2 values of \(t\) that satisfy the equation; that's because the stone passes through each point twice, except for the highest value of \(y\).

    -

    So, in the example we ahve just explained, we have that the throwing angle is θ=90°, so sin(90°)=1, the trajectory in \(y\) becomes:

    +

So, in the example we have just explained, the throwing angle is θ=90°, so sin(90°)=1, and the trajectory in \(y\) becomes:

\(y(t) = v_0*t -\frac{g*t^2}{2}\)

And the velocity, which is the derivative of the above equation, becomes:

\(v_{y}(t) = v_{0} -g*t\)

Those two equations are the ones plotted in the previous sketch: a parabola and a straight line that decreases with time. It's worth noticing that at each height \(y\) of the trajectory, the velocity can take 2 values, differing only in sign, meaning it can have 2 directions but the same magnitude. So keep in mind that when you throw an object into the sky, when it returns to you, its velocity will be the same as the one you threw it with.
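These two curves are easy to reproduce (a minimal sketch; the initial velocity of \(10\frac{m}{s}\) is arbitrary and we use Earth's \(g\), since we don't know the Martian value yet):

using Plots

v₀, g = 10.0, 9.8                  # illustrative values only
t_flight = 2 * v₀ / g              # time until the stone comes back when θ=90°
t = range(0, t_flight, length=100)
y = v₀ .* t .- g .* t.^2 ./ 2      # height: the parabola
vy = v₀ .- g .* t                  # velocity: the decreasing straight line
plot(t, [y vy], label=["y(t)" "v_y(t)"], xlabel="t [s]")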

    -
    -

    6.1 Calculating the constant g of Mars

    +It’s worth to notice that at each value \(y\) of the trajectory, the velocity could have 2 values, just differing in its sign, meaning it can has +2 directions, but with the same magnitude. So keep in mind that when you throw an object to the sky, when it returns to you, the velocity will +be the same with the one you threw it.

    +
    +
    +

    6.2 Calculating the constant g of Mars

    Now that we have understood the equations we will work with, we ask:

    how do we set the experiment and what do we need to measure?

    The experiment set up will go like this: - One person will be throwing stones with an angle. -- The other person will be far, watching from some distance, measuring the time since the other throw the stone and it hits the ground. The other measurement we will need is the distance Δx the stone travelled. -- Also, for the first iteration of the experiment, suppose we only keep the measurements with and initial angle θ~45° (we will loosen this constrain in a bit).

    +- The other person will be far, watching from some distance, measuring the time since the other throw the stone and it hits the ground. The other +measurement we will need is the distance Δx the stone travelled. +- Also, for the first iteration of the experiment, suppose we only keep the measurements with an initial angle θ~45° (we will loosen this +constrain in a bit).

    Let’s use the computer to help with all these calculations. As always, we first import the needed libraries

    using Distributions
     using StatsPlots
     using Turing

Suppose we did the experiment and measured the 5 points, Δx and Δt, shown below:

    -
    Δx_measured = [25.94, 38.84, 52.81, 45.54, 17.24]
    -t_measured = [3.91, 4.57, 5.43, 4.85, 3.15]
    +
    Δx = [25.94, 38.84, 52.81, 45.54, 17.24]
    +Δt = [3.91, 4.57, 5.43, 4.85, 3.15]

    Now, how do we estimate the constant g from those points?

    -

    Using the equations of the trajectory, when the stone hits the ground, \(y(t) = 0\), since we take the start of the \(y\) coordinate in the ground (negleting the initial height with respect to the maximum height), so finding the other then the initial point that fulfill this equation, we find that:

    +

    When the stone hits the ground, we have that \(y(t) = 0\). If we solve for \(t\), ignoring the trivial solution \(t = 0\), we find that

    \(t_{f} = \frac{2*v_{0}*sin(θ)}{g}\)

    where \(t_{f}\) is the time at which the stone hits the ground, the time we have measured. And replacing this time in the x(t) equation we find that:

    \(Δx=t_{f}*v_{0}*cos(θ)\)

    where Δx is the distance traveled by the stone.

    -

    So, solving for \(v_{0}\)m, the initial velocity, an unknown quantity, we have:

    +

    So, solving for \(v_{0}\), the initial velocity, an unknown quantity, we have:

    \(v_{0}=\frac{Δx}{t_{f}cos(θ)}\)

    Then replacing it in the equation of \(t_{f}\) and solving for \(\Delta x\) we obtain:

\(Δx=\frac{g*t_{f}^2}{2*tan(θ)}\)
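Before any Bayesian machinery, this last equation already gives a crude point estimate of \(g\) for each stone if we solve it for \(g\) (a minimal sketch; tand is Julia's tangent in degrees):

θ = 45
g_pointwise = 2 .* tand(θ) .* Δx ./ Δt.^2
# ≈ [3.39, 3.72, 3.58, 3.87, 3.47], so the five throws roughly agree

The scatter between throws is exactly the measurement noise that the probabilistic model below will account for.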

    So, the model we are going to propose is a linear regression. A linear equation has the form:

    \(y=m*x +b\)

    -

    where \(m\) is the slope of the curve and \(b\) is called the intercept. In our case, if we take \(x\) to be \(\frac{t_{f}^2}{2}\), the slope of the curve is g and the intercep is 0. So, in our linear model we are going to propose that each point in the curve is:

    +

where \(m\) is the slope of the curve and \(b\) is called the intercept. In our case, if we take \(x\) to be \(\frac{t_{f}^2}{2}\), the slope of the curve is \(g\) and the intercept is 0 (recall we are keeping θ~45°, so tan(θ)=1). So, in our linear model we are going to propose that each point in the curve is:

    \(\mu = m*x + b\)

    \(y \sim Normal(\mu,\sigma^2)\)

    -

    So, what this says is that each point of the regresion is drawn from a gaussian distribution with its center correnponding with a point in the line, as shown in the plot below.

    +

So, what this says is that each point of the regression is drawn from a Gaussian distribution with its center corresponding to a point on the line, as shown in the plot below.

    So, our linear model will be:

    \(g \sim Distribution\ to\ be\ proposed\)

\(\mu[i] = g*\frac{t_{f}[i]^2}{2}\)

\(\Delta x[i] \sim Normal(\mu[i],\sigma^2)\)

    -

    Where \(g\) has a distribution we will propose next. The first distribution we are going to propose is a uniform distribution for g, between the values of 0 and 10

    -
    plot(Uniform(0,10),xlim=(-1,11), ylim=(0,0.2), legend=false, title="Uniform prior distribution for g", xlabel="g_mars", ylabel="Probability", fill=(0, .5,:lightblue))
    +

Where \(g\) has a distribution we will propose next. The first distribution we are going to propose is a Uniform distribution for \(g\), between the values of 0 and 10.

    +
    plot(Uniform(0,10),xlim=(-1,11), ylim=(0,0.2), legend=false, fill=(0, .5, :lightblue));
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Uniform prior distribution for g")

    Now we define the model in Turing and sample from the posterior distribution.

@model gravity_uniform(t_final, x_final, θ) = begin
    # The number of observations.
    N = length(t_final)
    g ~ Uniform(0,10)
    μ = g .* 1/2 .* t_final.^2

    for n in 1:N
        x_final[n] ~ Normal(μ[n], 3)
    end
end
@@ -391,113 +421,112 @@ 6.1 Calculating the constant g of

    iterations = 10000
    -ϵ = 0.05
    -τ = 10
    -
    θ = 45
    -chain_uniform = sample(gravity_uniform(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false);
    +θ = 45 +chain_uniform = sample(gravity_uniform(Δt, Δx, θ), NUTS(), iterations, progress=false);

Plotting the posterior distribution for \(g\), we see that the values are mostly between 2 and 5, with the maximum near 3.8. Can we narrow the values we obtain?

    -
    histogram(chain_uniform[:g], xlim=[1,6], legend=false, normalized=true);
    -xlabel!("g_mars");
    -ylabel!("Probability");
    -title!("Posterior distribution for g with Uniform prior")
    +
    histogram(chain_uniform[:g], xlim=[1,6], legend=false, normalized=true);
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Posterior distribution for g with Uniform prior")

    -

    As a second option, we propose a Gaussian distribution instead of a uniforn distribution for \(g\), like the one shown below, with a mean of 5 and a variance of 2, and let the model update its beliefs with the points we have.

    -
    plot(Normal(5,2), legend=false, fill=(0, .5,:lightblue));
    -xlabel!("g_mars");
    -ylabel!("Probability");
    -title!("Normal prior distribution for g")
    -

    -

    We define then the model with a gaussian distribution as a prior for \(g\):

    -
    @model gravity_normal(t_final, x_final, θ) = begin
    -    # The number of observations.
    -    N = length(t_final)
    -    g ~ Normal(6,2)
    -    μ = g .* 1/2 .* t_final.^2
    -        
    -    for n in 1:N
    -        x_final[n] ~ Normal(μ[n], 3)
    -    end
    -end
    +

As a second option, we propose a Gaussian distribution instead of a Uniform distribution for \(g\), like the one shown below, with a mean of 5 and a standard deviation of 2, and let the model update its beliefs with the points we have.

    +
    plot(Normal(5,2), legend=false, fill=(0, .5,:lightblue));
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Normal prior distribution for g")
    +

+We then define the model with a Gaussian distribution as a prior for \(g\):

    +
    @model gravity_normal(t_final, x_final, θ) = begin
    +    # The number of observations.
    +    N = length(t_final)
    +    g ~ Normal(6,2)
    +    μ = g .* 1/2 .* t_final.^2
    +        
    +    for n in 1:N
    +        x_final[n] ~ Normal(μ[n], 3)
    +    end
    +end

Now we sample values from the posterior distribution and plot a histogram of the values obtained:

    -
    chain_normal = sample(gravity_normal(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false)
    -
    histogram(chain_normal[:g], xlim=[3,4.5], legend=false, normalized=true, title="Posterior distribution for g with Normal prior");
    -xlabel!("g_mars");
    -ylabel!("Probability");
    -title!("Posterior distribution for g with Normal prior")
    +
    chain_normal = sample(gravity_normal(Δt, Δx, θ), NUTS(), iterations, progress=false)
    +
    histogram(chain_normal[:g], xlim=[3,4.5], legend=false, normalized=true, title="Posterior distribution for g with Normal prior");
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Posterior distribution for g with Normal prior")

We see that the plausible values for the gravity have a clear center at 3.7, and now the distribution is narrower. That's good, but we can do better.
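The claim that the posterior got narrower can be checked directly (a minimal sketch, reusing the two chains sampled above):

using Statistics

std(vec(chain_uniform[:g])), std(vec(chain_normal[:g]))   # the second value should be the smaller one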

If we observe the prior distribution proposed for \(g\), we see that some values are negative, which makes no sense: if that were the case, when you throw the stone it would go up and up, escaping from the planet.

We then propose a new model that doesn't allow negative values to happen. The distribution we are interested in is a LogNormal distribution. The plot below shows the prior distribution for \(g\), a LogNormal distribution with parameters μ=1 and σ=0.5.

    -
    plot(LogNormal(1,0.5), xlim=(0,10), legend=false, fill=(0, .5,:lightblue));
    -xlabel!("g_mars");
    -ylabel!("Probability");
    -title!("Prior LogNormal distribution for g")
    +
    plot(LogNormal(1,0.5), xlim=(0,10), legend=false, fill=(0, .5,:lightblue));
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Prior LogNormal distribution for g")

The model gravity_lognormal defined below now has a LogNormal prior. We sample from the posterior distribution after updating with the measured data.

    -
    @model gravity_lognormal(t_final, x_final, θ) = begin
    -    # The number of observations.
    -    N = length(t_final)
    -    g ~ LogNormal(0.5,0.5)
    -    μ = g .* 1/2 .* t_final.^2
    -
    -    for n in 1:N
    -        x_final[n] ~ Normal(μ[n], 3)
    -    end
    -end
    -
    chain_lognormal = sample(gravity_lognormal(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false)
    -
    histogram(chain_lognormal[:g], xlim=[3,4.5], legend=false, title="Posterior distribution for g with LogNormal prior", normalized=true);
    -xlabel!("g_mars");
    -ylabel!("Probability");
    -title!("Posterior distribution for g with LogNormal prior")
    +
    @model gravity_lognormal(t_final, x_final, θ) = begin
    +    # The number of observations.
    +    N = length(t_final)
    +    g ~ LogNormal(0.5,0.5)
    +    μ = g .* 1/2 .* t_final.^2
    +
    +    for n in 1:N
    +        x_final[n] ~ Normal(μ[n], 3)
    +    end
    +end
    +
    chain_lognormal = sample(gravity_lognormal(Δt, Δx, θ), NUTS(), iterations, progress=false)
    +
    histogram(chain_lognormal[:g], xlim=[3,4.5], legend=false, title="Posterior distribution for g with LogNormal prior", normalized=true);
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Posterior distribution for g with LogNormal prior")

    -
    -

    6.2 Optimizing the throwing angle

    +
    +

    6.3 Optimizing the throwing angle

Now that we have a good understanding of the equations and the overall problem, we are going to add some difficulty and loosen a constraint we had imposed: suppose that the device employed to measure the angle has an error of 15°, no matter the angle.

We want to know which angles are the most convenient ones for running the experiment and measuring, or whether it doesn't matter at all.

To do the analysis we need to see how the angle influences the computation of \(g\), so solving the equation for \(g\) we have:

\(g = \frac{2*tan(\theta)*\Delta x}{t^{2}_f}\)

We can then plot the tangent of θ with an error of 15° and see its maximum and minimum values:

    -
    angles = 0:0.1:70
    -error = 15/2
    -μ = tan.(deg2rad.(angles))
    -ribbon = tan.(deg2rad.(angles .+ error)) - μ
    -
    plot(angles, μ, ribbon=ribbon, color="lightblue", legend=false);
    -xlabel!("θ [deg]");
    -ylabel!("tan(θ)");
    -title!("tan(θ) and its error")
    +
    angles = 0:0.1:70
    +error = 15/2
    +μ = tan.(deg2rad.(angles))
    +ribbon = tan.(deg2rad.(angles .+ error)) - μ
    +
    plot(angles, μ, ribbon=ribbon, color="lightblue", legend=false);
    +xlabel!("θ [deg]");
    +ylabel!("tan(θ)");
    +title!("tan(θ) and its error")

    But we don’t care about the absolute value of the error, we want the relavite error, so plotting the percentual error we have:

    -
    err = tan.(deg2rad.(angles .+ error)) .- tan.(deg2rad.(angles .- error))
    -perc_error = err .* 100 ./ μ
    -
    plot(angles, perc_error, xlim=(5,70), ylim=(0,200), color="lightblue", legend=true, lw=3, label="Percentual error");
    -xlabel!("θ [deg]");
    -ylabel!("Δtan(θ)/θ");
    -title!("Percentual error");
    -vline!([angles[findfirst(x->x==minimum(perc_error), perc_error)]], lw=3, label="Minimum error")
    +
    err = tan.(deg2rad.(angles .+ error)) .- tan.(deg2rad.(angles .- error))
    +perc_error = err .* 100 ./ μ
    +
    plot(angles, perc_error, xlim=(5,70), ylim=(0,200), color="lightblue", legend=true, lw=3, label="Percentual error");
    +xlabel!("θ [deg]");
    +ylabel!("Δtan(θ)/θ");
    +title!("Percentual error");
    +vline!([angles[findfirst(x->x==minimum(perc_error), perc_error)]], lw=3, label="Minimum error")

So, now we see that the lowest percentual error is obtained when we work at angles near 45°. This makes sense, since the relative error of \(tan(\theta)\) grows like \(\frac{d}{d\theta}\ln(tan(\theta)) = \frac{2}{sin(2\theta)}\), which is minimized at θ=45°. So we are good to go: we can use the data we measured, adding the error in the angle. We now define the new model, where we include an uncertainty in the angle. We propose a Uniform prior for the angle, centered at 45°, the angle at which we think the measurement was done.

    -
    @model gravity_angle_uniform(t_final, x_final, θ) = begin
    -    # The number of observations.
    -    error = 15
    -    angle ~ Uniform(45 - error/2, 45 + error/2)
    -    g ~ LogNormal(log(4),0.3)
    -    μ = g .* (t_final.*t_final./(2 * tan.(deg2rad(angle))))
    -        
    -    N = length(t_final)
    -    for n in 1:N
    -        x_final[n] ~ Normal(μ[n], 10)
    -    end
    -end
    -
    chain_uniform_angle = sample(gravity_angle_uniform(t_measured, Δx_measured, θ), HMC(ϵ, τ), iterations, progress=false)
    -
    histogram(chain_uniform_angle[:g], legend=false, normalized=true);
    -xlabel!("g_mars");
    -ylabel!("Probability");
    -title!("Posterior distribution for g, including uncertainty in the angle")
    +
    @model gravity_angle_uniform(t_final, x_final, θ) = begin
    +    # The number of observations.
    +    error = 15
    +    angle ~ Uniform(45 - error/2, 45 + error/2)
    +    g ~ LogNormal(log(4),0.3)
    +    μ = g .* (t_final.*t_final./(2 * tan.(deg2rad(angle))))
    +        
    +    N = length(t_final)
    +    for n in 1:N
    +        x_final[n] ~ Normal(μ[n], 10)
    +    end
    +end
    +
    chain_uniform_angle = sample(gravity_angle_uniform(Δt, Δx, θ), NUTS(), iterations, progress=false)
    +
    histogram(chain_uniform_angle[:g], legend=false, normalized=true);
    +xlabel!("g_mars");
    +ylabel!("Probability density");
    +title!("Posterior distribution for g, including uncertainty in the angle")

    -
    -

    6.2.1 Calculating the escape velocity

    +
    +

    6.3.1 Calculating the escape velocity

    Now that we have calculated the gravity, we are going to calculate the escape velocity.

    What data do we have until now?

We know from the beginning that:

@@ -509,26 +538,27 @@ 6.2.1 Calculating the escape velo

    so,

\(\frac{v_{Mars}}{11} =\sqrt{\frac{2*g_{Mars}*R_{Mars}}{2*9.8*R_{Earth}}}\) and, since \(R_{Mars}=\frac{R_{Earth}}{2}\),

    \(v_{Mars} =11 * \sqrt{\frac{g_{Mars}}{9.8*2}} \qquad \left[\frac{km}{s} \right]\)

    -
    v = 11 .* sqrt.(chain_uniform_angle[:g] ./ (9.8*2))
    -
    histogram(v, legend=false, normalized=true);
    -title!("Escape velocity from Mars");
    -xlabel!("Escape Velocity of Mars [km/s]");
    -ylabel!("Probability")
    +
    v = 11 .* sqrt.(chain_uniform_angle[:g] ./ (9.8*2))
    +
    histogram(v, legend=false, normalized=true);
    +title!("Escape velocity from Mars");
    +xlabel!("Escape Velocity of Mars [km/s]");
    +ylabel!("Probability density")

Finally, we obtained the escape velocity from Mars.

    -
    -

    6.3 Summary

    +
    +

    6.4 Summary

    In this chapter we had to find the escape velocity from Mars. To solve this problem, we first needed to find the gravity of Mars, so we started with a physical description of the problem and concluded that by measuring the distance and time of a rock throw plus some Bayesian analysis we could infer the gravity of Mars.

    Then we created a simple probabilistic model, with the prior probability set to a uniform distribution and the likelihood to a normal distribution. We sampled the model and obtained our first posterior probability. -We repeated this process two more times, changing the prior distribution of the model for more accurate ones, first with a normal distribution and then with a logarithmic one.

    +We repeated this process two more times, changing the prior distribution of the model for more accurate ones, first with a Normal distribution and +then with a LogNormal one.

    Finally, we used the gravity we inferred to calculate the escape velocity from Mars.

    -
    -

    6.4 References

    +
    +

    6.5 References

    • Khatri, Poudel, Gautam, M.K., P.R., A.K. (2010). Principles of Physics. Kathmandu: Ayam Publication. pp. 170, 171. [ISBN]
    diff --git a/docs/reference-keys.txt b/docs/reference-keys.txt index fe912b21..830ba2c8 100644 --- a/docs/reference-keys.txt +++ b/docs/reference-keys.txt @@ -139,3 +139,4 @@ making-predictions evaluating-the-accuracy appendix---a-little-more-about-alpha section +proyectile-motion diff --git a/docs/search_index.json b/docs/search_index.json index 0759aae4..2370bded 100644 --- a/docs/search_index.json +++ b/docs/search_index.json @@ -1,12 +1 @@ -[ - [ - "spam-filter.html", - "Chapter 4 Spam filter 4.1 Naive Bayes: Spam or Ham? 4.2 The Training Data 4.3 Preprocessing the Data 4.4 The Naive Bayes Approach 4.5 Training the Model 4.6 Making Predictions 4.7 Evaluating the Accuracy 4.8 Summary 4.9 Appendix - A little more about alpha", - " Chapter 4 Spam filter 4.1 Naive Bayes: Spam or Ham? Nobody likes spam emails. How can Bayes help? In this chapter, we’ll keep expanding our data science knowledge with a practical example. A simple yet effective way of using Bayesian probability to create a spam filter from scratch will be introduced. The filter will examine emails and classify them as either spam or ham (the word for non-spam emails) based on their content. What we will be implementing here is a supervised learning model, in other words, a classification model that has been trained on previously classified data. Think of it like a machine to which you can give some input, like an email, and will give you some label to that input, like spam or ham. This machine has a lot of tiny knobs, and based on their particular configuration it will output some label for each input. Supervised learning involves iteratively finding the right configuration of these knobs by letting the machine make a guess with some pre-classified data, checking if the guess matches the true label, and if not, tune the knobs in some controlled way. The way our machine will make predictions is based on the underlying mathematical model. For a spam filter, a naive Bayes approach has proven to be effective, and you will have the opportunity to verify that yourself at the end of the chapter. In a naive Bayes model, Bayes’ theorem is the main tool for classifying, and it is naive because we make very loose assumptions about the data we are analyzing. This will be clearer once we dive into the implementation. 4.2 The Training Data For the Bayesian spam filter to work correctly, we need to feed it some good training data. In this context, that means having a large enough corpus of emails that have been pre-classified as spam or ham. The emails should be collected from a sufficiently heterogeneous group of people. After all, spam is a somewhat subjective category: one person’s spam may be another person’s ham. The proportion of spam vs. ham in our data should also be somewhat representative of the real proportion of emails we receive. Fortunately, there are a lot of very good datasets available online. We’ll use the “Email Spam Classification Dataset CSV” from Kaggle, a website where data science enthusiasts and practitioners publish datasets, participate in competitions, and share their knowledge. The dataset’s description included online helps us make sense of its contents: The .csv file contains 5,172 rows, one row for each email. There are 3,002 columns. The first column indicates Email name. The name has been set with numbers and not recipients’ name to protect privacy. The last column has the labels for prediction: for spam, for not spam. 
The remaining 3,000 columns are the 3,000 most common words in all the emails, after excluding the non-alphabetical characters/words. For each row, the count of each word(column) in that email(row) is stored in the respective cells. Let’s take a look at the data. The following code snippet outputs a view of the first and last rows of the dataset. raw_df = CSV.read("./04_naive_bayes/data/emails.csv", DataFrame) ## 5172×3002 DataFrame ## Row │ Email No. the to ect and for of a you ho ⋯ ## │ String15 Int64 Int64 Int64 Int64 Int64 Int64 Int64 Int64 In ⋯ ## ──────┼───────────────────────────────────────────────────────────────────────── ## 1 │ Email 1 0 0 1 0 0 0 2 0 ⋯ ## 2 │ Email 2 8 13 24 6 6 2 102 1 ## 3 │ Email 3 0 0 1 0 0 0 8 0 ## 4 │ Email 4 0 5 22 0 5 1 51 2 ## 5 │ Email 5 7 6 17 1 5 2 57 0 ⋯ ## 6 │ Email 6 4 5 1 4 2 3 45 1 ## 7 │ Email 7 5 3 1 3 2 1 37 0 ## 8 │ Email 8 0 2 2 3 1 2 21 6 ## ⋮ │ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ## 5166 │ Email 5166 1 0 1 0 3 1 12 1 ⋯ ## 5167 │ Email 5167 1 0 1 1 0 0 4 0 ## 5168 │ Email 5168 2 2 2 3 0 0 32 0 ## 5169 │ Email 5169 35 27 11 2 6 5 151 4 ## 5170 │ Email 5170 0 0 1 1 0 0 11 0 ⋯ ## 5171 │ Email 5171 2 7 1 0 2 1 28 2 ## 5172 │ Email 5172 22 24 5 1 6 5 148 8 ## 2993 columns and 5157 rows omitted As you can see, the output informs the amount of rows and columns and the type of each column and allows us to see a sample of the data. 4.3 Preprocessing the Data Before we use the data to train our filter, we need to preprocess it a little bit. First, we should filter out very common words, such as articles and pronouns, which will most likely add noise rather than information to our classification algorithm. all_words = names(raw_df)[2:end-1] all_words_text = join(all_words, " ") document = StringDocument(all_words_text) prepare!(document, strip_articles) prepare!(document, strip_pronouns) vocabulary = split(TextAnalysis.text(document)) clean_words_df = raw_df[!, vocabulary] data_matrix = Matrix(clean_words_df)' In the first line, we create a variable all_words to store a list of all the words present in the emails. As our dataset has a column for each word, we do this by storing the names of every column with the names function, except for the first and last column, which are for email id and for the spam or ham label, respectively. Let’s move on to the second and third lines of the code. We would like to filter out some words that are very common in the English language, such as articles and pronouns, which will most likely add noise rather than information to our classification algorithm. For this we will use two Julia packages that are specially designed for working with texts of any type. These are Languages.jl and TextAnalysis.jl. In the third line, we create a StringDocument, which is a struct provided by TextAnalysis.jl and we use its built-in methods to remove articles and pronouns from the list of words we created before. This is done by calling the prepare function two times, with two different flags: strip_articles and strip_pronouns. What follows is just code to recover our original DataFrame with only the relevant columns, i.e., the words that were not filtered. A clean_words_df DataFrame is created selecting those columns only. Finally, we turn our DataFrame into a matrix with its rows and columns transposed. This is just the convention used by the packages we are working with to make our analysis; each column is one data realization. Next, we need to divide the data in two: a training set and a testing set. 
This is standard practice when working with models that learn from data, like the one we’re going to implement. We’re going to train the model on the training set, and then evaluate the model’s accuracy by having it make predictions on the testing set. In Julia, the package MLDataUtils.jl has some nice functionalities for data manipulations like this. labels = raw_df.Prediction (x_train, y_train), (x_test, y_test) = splitobs(shuffleobs((data_matrix, labels)), at = 0.7) The function splitobs splits our dataset into a training set and a testing set, and shuffleobs randomizes the order of the data in the split. We pass a labels array to our split function so it knows how to properly split the dataset. Now we can turn our attention to building the spam filter. 4.4 The Naive Bayes Approach As we mentioned, what we are facing here is a classification problem, and we will code from scratch and use a supervised learning algorithm to find a solution with the help of Bayes’ theorem. We’re going to use a naive Bayes classifier to create our spam filter. We’re going to use a classifier to create our spam filter. This method is going to treat each email just as a collection of words, with no regard for the order in which they appear. This means we won’t take into account semantic considerations like the particular relationship between words and their context. Our strategy will be to estimate a probability of an incoming email being ham or spam and make a decision based on that. Our general approach can be summarized as: \\(P(spam|email) \\propto P(email|spam)P(spam)\\) \\(P(ham|email) \\propto P(email|ham)P(ham)\\) Notice we use the \\(\\propto\\) sign, meaning proportional to, instead of the = sign because the denominator from Bayes’s theorem is missing. In this case, we won’t need to calculate it, as it’s the same for both probabilities and all we’re going to care about is a comparison of these two probabilities. In this naive approach, where semantics aren’t taken into account and each email is just a collection of words, the conditional probability \\(P(email|spam)\\) means the probability that a given email can be generated with the collection of words that appear in the spam category of our data. Let’s take a quick example. Imagine for a moment that our training set of emails consists just of these three emails, all labeled as spam: Email 1: ‘Are you interested in buying my product?’ Email 2: ‘Congratulations! You’ve won $1000!’ Email 3: ‘Check out this product!’ Also imagine we receive a new, unclassified email and we want to discover \\(P(email|spam)\\). The new email looks like this: New email: ‘Apply and win all these products!’ The new email contains the words win and product, which are rather common in our example’s training data. We would therefore expect \\(P(email|spam)\\), the probability of the new email being generated by the words encountered in the training spam email set, to be relatively high. (The word \\emph{win} appears in the form \\emph{won} in the training set, but that’s OK. The standard linguistic technique of \\emph{lemmatization} groups together any related forms of a word and treats them as the same word.) Mathematically, the way to calculate \\(P(email|spam)\\) is to take each word in our target email, calculate the probability of it appearing in spam emails based on our training set, and multiply those probabilties together. 
\\(P(email|spam) = \\prod_{i=1}^{n}P(word_i|spam)\\) We use a similar calculation to determine \\(P(email|ham)\\), the probability of the new email being generated by the words encountered in the training ham email set: \\(P(email|ham) = \\prod_{i=1}^{n}P(word_i|ham)\\) The multiplication of each of the probabilities associated with a particular word here stems from the naive assumption that all the words in the email are statistically independent. In reality, this assumption isn’t necessarily true. In fact, it’s most likely false. Words in a language are never independent from one another, but this simple assumption seems to be enough for the level of complexity our problem requires. The probability of a given word \\(word_i\\) being in a given category is calculated like so: \\[P(word_i|spam) = \\frac{N_{word_i|spam} + \\alpha}{N_{spam} + \\alpha N_{vocabulary}}\\] \\[P(word_i|ham) = \\frac{N_{word_i|ham} + \\alpha}{N_{ham} + \\alpha N_{vocabulary}}\\] These formulas tell us exactly what we have to calculate from our data. We need the numbers \\(N_{word_i|spam}\\) and \\(N_{word_i|ham}\\) for each word, meaning the number of times that \\(word_i\\) is used in the spam and ham categories, respectively. \\(N_{spam}\\) and \\(N_{ham}\\) are the total number of words used in the spam and ham categories (including all word repetitions), and \\(N_{vocabulary}\\) is the total number of unique words in the dataset. The variable \\(\\alpha\\) is a smoothing parameter that prevents the probability of a given word being in a given category from going down to zero. If a given word hasn’t appeared in the spam category in our training dataset, for example, we don’t want to assign it zero probability of appearing in new spam emails. As all of this information will be specific to our dataset, a clever way to aggregate it is to use a Julia struct, with attributes for the pieces of data we’ll need to access over and over during the prediction process. Here’s the implementation: mutable struct BayesSpamFilter words_count_ham::Dict{String, Int64} words_count_spam::Dict{String, Int64} N_ham::Int64 N_spam::Int64 vocabulary::Array{String} BayesSpamFilter() = new() end The relevant attributes of the struct are words_count_ham and words_count_spam, two dictionaries containing the frequency of appearance of each word in the ham and spam datasets; N_ham and N_spam, the total number of words appearing in each category; and vocabulary, an array of all the unique words in the dataset. The line BayesSpamFilter() = new() is the constructor of this struct. Because the constructor is empty, all the attributes will be undefined when we instantiate the filter. We’ll have to define some functions to fill these variables with values that are relevant to our particular problem. First, here’s a function word_count that counts the occurrences of each word in the ham and spam categories. Now we are going to define some functions that will be important for our filter implementation. function words_count(word_data, vocabulary, labels, spam=0) count_dict = Dict{String,Int64}() n_emails = size(word_data)[2] for (i, word) in enumerate(vocabulary) count_dict[word] = sum([word_data[i, j] for j in 1:n_emails if labels[j] == spam]) end return count_dict end The function word_count counts the occurrences of each word in the ham and spam categories. One of its parameters is word_data, which we defined before and is a matrix where each column is an email and each row is a word. Next, we’ll define a fit! function for our spam filter struct. 
Notice we’re using the bang (!) convention here to indicate a function that modifies its arguments in-place (in this case, the spam filter struct itself). This function fits our model to the data, a typical procedure in data science and machine learning areas. function fit!(model::BayesSpamFilter, x_train, y_train, voc) model.vocabulary = voc model.words_count_ham = words_count(x_train, model.vocabulary, y_train, 0) model.words_count_spam = words_count(x_train, model.vocabulary, y_train, 1) model.N_ham = sum(values(model.words_count_ham)) model.N_spam = sum(values(model.words_count_spam)) return end ## fit! (generic function with 1 method) What we mean by fitting the model to the data is mainly filling all the undefined parameters in our struct with values informed by the training data. To do this, we use the words_count function we defined earlier. Notice that we’re only fitting the model to the training portion of the data, since we’re reserving the testing portion to evaluate the model’s accuracy. 4.5 Training the Model Now it’s time to instantiate our spam filter and fit the model to the training data. With the struct and helper functions we’ve defined, the process is quite straightforward. spam_filter = BayesSpamFilter() fit!(spam_filter, x_train, y_train, vocabulary) We create an instance of our BayesSpamFilter struct and pass it to our fit! function along with the data. Notice that we’re only passing in the training portion of the dataset, since we want to reserve the testing portion to evaluate the model’s accuracy later. 4.6 Making Predictions Now that we have our model, we can use it to make some spam vs. ham predictions and assess its performance. We’ll define a few more functions to help with this process. First, we need a function implementing the TAL formula that we discussed earlier. function word_spam_probability(word, words_count_ham, words_count_spam, N_ham, N_spam, n_vocabulary, α) ham_prob = (words_count_ham[word] + α) / (N_ham + α * (n_vocabulary)) spam_prob = (words_count_spam[word] + α) / (N_spam + α * (n_vocabulary)) return ham_prob, spam_prob end ## word_spam_probability (generic function with 1 method) This function calculates \\(P(word_i|spam)\\) and \\(P(word_i|ham)\\) for a given word. We’ll call it for each word of an incoming email within another function, spam_predict, to calculate the probability of that email being spam or ham. function spam_predict(email, model::BayesSpamFilter, α, tol=100) ngrams_email = ngrams(StringDocument(email)) email_words = keys(ngrams_email) n_vocabulary = length(model.vocabulary) ham_prior = model.N_ham / (model.N_ham + model.N_spam) spam_prior = model.N_spam / (model.N_ham + model.N_spam) if length(email_words) > tol word_freq = values(ngrams_email) sort_idx = sortperm(collect(word_freq), rev=true) email_words = collect(email_words)[sort_idx][1:tol] end email_ham_probability = BigFloat(1) email_spam_probability = BigFloat(1) for word in intersect(email_words, model.vocabulary) word_ham_prob, word_spam_prob = word_spam_probability(word, model.words_count_ham, model.words_count_spam, model.N_ham, model.N_spam, n_vocabulary, α) email_ham_probability *= word_ham_prob email_spam_probability *= word_spam_prob end return ham_prior * email_ham_probability, spam_prior * email_spam_probability end This function takes as input a new email that we want to classify as spam or ham, our fitted model, an \\(α\\) value (which we’ve already discussed), and a tolerance value tol. 
Finally, we arrive at the point of actually testing our model. We create another function to manage the process; it classifies each email as ham (represented by the number 0) or spam (represented by the number 1):

```julia
function get_predictions(x_test, y_test, model::BayesSpamFilter, α, tol=200)
    N = length(y_test)
    predictions = Array{Int64,1}(undef, N)
    for i in 1:N
        # Rebuild email i as text by repeating each vocabulary word
        # as many times as it appears in that email's column.
        email = string([repeat(string(word, " "), n)
                        for (word, n) in zip(model.vocabulary, x_test[:, i])]...)
        pham, pspam = spam_predict(email, model, α, tol)
        pred = argmax([pham, pspam]) - 1
        predictions[i] = pred
    end
    predictions
end
```

This function takes in the testing portion of the data and our trained model. We call our spam_predict function for each email in the testing data and use the index of the maximum (argmax) of the two returned probability values to predict (pred) whether the email is spam or ham. We return the predictions as an array of values containing zeros for ham emails and ones for spam emails. Here we call the function to make predictions about the test data:

```julia
predictions = get_predictions(x_test, y_test, spam_filter, 1)
```

Let's take a look at the predicted classifications of just the first five emails in the test data:

```julia
predictions[1:5]

## 5-element Vector{Int64}:
##  0
##  0
##  1
##  0
##  1
```

Of the first five emails, two (the third and the fifth) were classified as spam, and the other three were classified as ham.

4.7 Evaluating the Accuracy

Looking at the predictions themselves is pretty meaningless; what we really want to know is the model's accuracy. We'll define another function to calculate this:

```julia
function spam_filter_accuracy(predictions, actual)
    N = length(predictions)
    correct = sum(predictions .== actual)
    accuracy = correct / N
    accuracy
end
```

This function compares the predicted classifications with the actual classifications of the test data, counts the number of correct predictions, and divides this number by the total number of test emails, giving us an accuracy measurement. Here we call the function:

```julia
spam_filter_accuracy(predictions, y_test)

## 0.9510309278350515
```

The output indicates our model is about 95 percent accurate. It appears our model is performing very well!
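One quick way to put that 95 percent in context is to compare it against a trivial majority-class baseline. This is a sketch, assuming y_test is the 0/1 label vector used above:

```julia
# A classifier that always predicts "ham" is right whenever the email is ham,
# so its accuracy equals the fraction of ham emails in the test set.
baseline_accuracy = sum(y_test .== 0) / length(y_test)  # roughly 0.71 for this split
```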
Such a high accuracy rate is quite astonishing for a model so naive and simple. In fact, it may be a little too good to be true, because we have to take one more thing into account: our model classifies emails into spam or ham, but the number of ham emails in our dataset is considerably higher than the number of spam ones. Let's see the percentages:

```julia
sum(raw_df[!, :Prediction]) / length(raw_df[!, :Prediction])

## 0.2900232018561485
```

To calculate the proportion of spam emails, we sum over the Prediction column of the dataset, remembering that it only consists of 0s and 1s, and then divide by the total number of emails. This type of classification problem, where there's an unequal distribution of classes in the dataset, is called imbalanced. With imbalanced data, a better way to see how the model is performing is to construct a confusion matrix, an \\(N \\times N\\) matrix, where \\(N\\) is the number of target classes (in our case, 2, for spam and ham). The matrix compares the actual values for each class with those predicted by the model. Here's a function that builds a confusion matrix for our spam filter:

```julia
function spam_filter_confusion_matrix(y_test, predictions)
    # The 2x2 matrix is instantiated with zeros
    confusion_matrix = zeros((2, 2))
    confusion_matrix[1, 1] = sum(isequal(y_test[i], 0) & isequal(predictions[i], 0) for i in 1:length(y_test))
    confusion_matrix[1, 2] = sum(isequal(y_test[i], 1) & isequal(predictions[i], 0) for i in 1:length(y_test))
    confusion_matrix[2, 1] = sum(isequal(y_test[i], 0) & isequal(predictions[i], 1) for i in 1:length(y_test))
    confusion_matrix[2, 2] = sum(isequal(y_test[i], 1) & isequal(predictions[i], 1) for i in 1:length(y_test))

    # Now we convert the confusion matrix into a DataFrame
    confusion_df = DataFrame(prediction=String[], ham_mail=Int64[], spam_mail=Int64[])
    confusion_df = vcat(confusion_df, DataFrame(prediction="Model predicted Ham",
                                                ham_mail=confusion_matrix[1, 1],
                                                spam_mail=confusion_matrix[1, 2]))
    confusion_df = vcat(confusion_df, DataFrame(prediction="Model predicted Spam",
                                                ham_mail=confusion_matrix[2, 1],
                                                spam_mail=confusion_matrix[2, 2]))
    return confusion_df
end
```

Now let's call our function to build the confusion matrix for our model:

```julia
confusion_matrix = spam_filter_confusion_matrix(y_test[:], predictions)

## 2×3 DataFrame
##  Row │ prediction            ham_mail  spam_mail
##      │ String                Float64   Float64
## ─────┼───────────────────────────────────────────
##    1 │ Model predicted Ham     1056.0       33.0
##    2 │ Model predicted Spam      43.0      420.0
```

Row 1 of the confusion matrix shows all the emails our model classified as ham: 1,056 of those classifications were correct and 33 were incorrect. Similarly, the spam_mail column shows the classifications of all the actual spam emails: 33 were misidentified as ham, and 420 were correctly identified as spam. Now that we have the confusion matrix, we can calculate the accuracy of the model segmented by category:

```julia
ham_accuracy = confusion_matrix[1, :ham_mail] / (confusion_matrix[1, :ham_mail] + confusion_matrix[2, :ham_mail])

## 0.9608735213830755

spam_accuracy = confusion_matrix[2, :spam_mail] / (confusion_matrix[1, :spam_mail] + confusion_matrix[2, :spam_mail])

## 0.9271523178807947
```

These values give us a more fine-grained measure of the model's accuracy. Now we know that our spam filter doesn't have the same degree of accuracy for spam as for ham emails: as a consequence of the imbalance in our data, ham emails are classified correctly more often than spam emails. Still, with both percentages above 90, the accuracy is pretty good for such a simple and naive model. Models like this one can be used as a baseline on top of which more complex models are built.
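For imbalanced problems it is also common to report precision, recall, and F1 for the minority class. Here is a sketch computed from the same confusion-matrix DataFrame; the approximate values follow directly from the counts above:

```julia
TP = confusion_matrix[2, :spam_mail]  # 420 spam emails correctly flagged
FP = confusion_matrix[2, :ham_mail]   # 43 ham emails wrongly flagged as spam
FN = confusion_matrix[1, :spam_mail]  # 33 spam emails that slipped through

precision = TP / (TP + FP)  # about 0.907: how trustworthy a "spam" flag is
recall    = TP / (TP + FN)  # about 0.927: how much of the spam we actually catch
f1 = 2 * precision * recall / (precision + recall)  # about 0.917
```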
4.8 Summary

In this chapter, we've used a naive Bayes approach to build a simple email spam filter, walking through the whole process of training, testing, and evaluating a learning model. First, we obtained a dataset of emails already classified as spam or ham and preprocessed the data. Then we considered the theoretical framework for our naive analysis. Using Bayes' theorem on the available data, we assigned to each word in the email dataset a probability of belonging to a spam or a ham email; the probability of a new email being classified as spam is then proportional to the product of the probabilities of its constituent words. We defined a Julia struct for the spam filter object and created functions to fit it to the data. Finally, we made predictions on new data and evaluated our model's performance by calculating its accuracy and building a confusion matrix.

4.9 Appendix: A Little More About Alpha

As we have seen, to calculate the probability of an email being spam, we use

\\[P(email|spam) = \\prod_{i=1}^{n}P(word_i|spam) = P(word_1|spam) * P(word_2|spam) \\cdots P(word_{np}|spam)\\]

where \\(P(word_{np}|spam)\\) stands for the probability of a word that is not present in our dataset. What probability should be assigned to such a word? One way to handle this could be to simply ignore that term in the multiplication, in other words, to assign \\(P(word_{np}|spam) = 1\\). Without thinking about it too much, we can conclude that this doesn't make any sense, since it would mean that the probability of finding that word in a spam (or ham) email is equal to 1. A more logically consistent approach would be to assign the word zero probability, but there is a problem: with \\(P(word_{np}|spam) = 0\\), we quickly see that

\\[P(word_1|spam) * P(word_2|spam) \\cdots P(word_{np}|spam) = 0\\]

This is the motivation for introducing the smoothing parameter \\(\\alpha\\) into our equation. In a real-world scenario, we should expect words that are not present in our training set to appear, and although it makes sense that they don't get a high probability, it can't be 0. When such a word appears, the probability assigned to it is simply

\\[P(word_{np}|spam) = \\frac{N_{word_{np}|spam} + \\alpha}{N_{spam} + \\alpha N_{vocabulary}} = \\frac{0 + \\alpha}{N_{spam} + \\alpha N_{vocabulary}} = \\frac{\\alpha}{N_{spam} + \\alpha N_{vocabulary}}\\]

In summary, \\(\\alpha\\) is just a smoothing parameter that keeps the probability of finding a word that is not in our dataset from going down to 0. Since we want to keep the probability of such words low enough, it makes sense to use \\(\\alpha = 1\\).
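A tiny numeric illustration of this effect, with made-up totals that are not from our dataset, shows how the probability of an unseen word shrinks but never vanishes:

```julia
# Hypothetical counts: 10,000 total words in spam, 5,000 unique words overall.
N_spam_toy = 10_000
N_vocab_toy = 5_000
for α in (0.1, 1, 10)
    p_unseen = α / (N_spam_toy + α * N_vocab_toy)
    println("α = $α  →  P(unseen word | spam) = $p_unseen")
end
```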
Chapter 6 Escaping from Mars

Suppose you landed on Mars by mistake and you want to leave that rocky planet and return home. To escape from a planet you need a very important piece of information: the escape velocity of the planet. What on Mars is the escape velocity? We are going to use the same thought experiment Newton used when thinking about escaping from the gravity of Earth. Gravity pulls us down, so if we shoot a cannonball, as in the sketch shown below, what will happen? For some velocities the cannonball will return to Earth, but there is a velocity at which it escapes, since the gravitational pull is not enough to bring it back to the surface. That velocity is called the escape velocity.

Assuming that the planet is spherical and considering only the gravitational effect of Mars, the escape velocity can be described as

\\(v_{escape}=\\sqrt{2*g_{planet}*r_{planet}}\\)

where \\(r\\) is the radius of the planet and \\(g\\) the gravity constant at its surface. Suppose that we remember from school that the escape velocity from Earth is \\(11\\frac{km}{s}\\) and that the radius of Mars is half of the Earth's. We also remember that the gravity of Earth at its surface is \\(9.8\\frac{m}{s^2}\\), so all we need to estimate the escape velocity of Mars is the gravity of the planet at its surface. So we decide to make an experiment and gather some data. But what exactly do we need to measure? Let's see. We are going to calculate the constant \\(g_{mars}\\) just by throwing stones, but first we need to explain a bit the equations behind the experiment. The topic we need to revisit is projectile motion.

6.1 Projectile Motion

Gravity pulls us down to Earth, or in our case, to Mars. This means that we have an acceleration, since there is a force. Recalling Newton's equation:

\\(\\overrightarrow{F} = m * \\overrightarrow{a}\\)

where \\(m\\) is the mass of the object, \\(\\overrightarrow{F}\\) is the force (it's what makes us fall) and \\(\\overrightarrow{a}\\) is the acceleration, which in our case is what we call gravity, \\(\\overrightarrow{g}\\). The arrow \\(\\overrightarrow{}\\) over a letter means that the quantity has a direction in space; in our case, gravity points to the center of the Earth, or Mars. How can we derive the motion of the stones from that equation? In the figure below we show a sketch of the problem: we have two axes, \\(x\\) and \\(y\\); the \\(x\\) axis is parallel to the ground and the \\(y\\) axis is perpendicular to it, pointing to the sky. We also draw the initial velocity \\(v_0\\) of the projectile and the angle \\(\\theta\\) it makes with the ground. It's also important to notice that gravity points in the direction opposite to the \\(y\\) axis. But what is the trajectory of the projectile, and how do the coordinates \\(x\\) and \\(y\\) evolve with time? If we remember from school, the equations for \\(x\\) and \\(y\\) over time are:

\\(x(t) = v_0*t*cos(θ)\\)

\\(y(t) = v_0*t*sin(θ) -\\frac{g*t^2}{2}\\)

where \\(t\\) is the time at which we want to know the coordinates. What do these equations tell us? The evolution of the projectile along the \\(x\\) axis alone follows a straight line (until the stone hits the ground), while along the \\(y\\) axis the movement follows a parabola. How do we interpret that? We can imagine what happens when we throw a stone to the sky: the stone goes up until, at some point, it reaches the highest position it can, and then it starts to come down. How does the velocity evolve along this trajectory? From the beginning, the velocity decreases until it reaches the value of 0 at the highest point, where the stone stops for a moment; then it changes direction and starts to increase again, pointing towards the ground. Plotting the evolution of the height of the stone, we obtain the plot shown below. We see that at the beginning the stone goes up fast and then slows down. Notice that for each value of \\(y\\) there are two values of \\(t\\) that satisfy the equation; that's because the stone passes through each height twice, except for the highest value of \\(y\\).
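We can see this two-crossings behavior numerically with a small sketch for a stone thrown straight up (so sin(θ)=1). The values v0 = 10 m/s and g = 9.8 m/s² are made up for illustration and do not come from the experiment:

```julia
# Height of the stone for a straight-up throw, with assumed values.
v0, g = 10.0, 9.8
y(t) = v0 * t - g * t^2 / 2

# The two times at which the stone passes a given height h, from the quadratic formula.
t_pass(h) = ((v0 - sqrt(v0^2 - 2g * h)) / g, (v0 + sqrt(v0^2 - 2g * h)) / g)

t_pass(3.0)     # (≈ 0.37, ≈ 1.68): once on the way up, once on the way down
t_top = v0 / g  # ≈ 1.02: time of the single highest point, where both times coincide
```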
So, in the example we have just explained, the throwing angle is θ=90°, so sin(90°)=1 and the trajectory in \\(y\\) becomes:

\\(y(t) = v_0*t -\\frac{g*t^2}{2}\\)

And the velocity, which is the derivative of the equation above, becomes:

\\(v_{y}(t) = v_{0} -g*t\\)

These two equations are the ones plotted in the previous sketch: a parabola and a straight line that decreases with time. It's worth noticing that at each height \\(y\\) of the trajectory, the velocity can take two values that differ only in their sign, meaning it can point in two directions but with the same magnitude. So keep in mind that when you throw an object to the sky, when it comes back to you its speed will be the same as when you threw it.

6.2 Calculating the constant g of Mars

Now that we understand the equations we will work with, we ask: how do we set up the experiment and what do we need to measure? The setup will go like this:

- One person will be throwing stones at an angle.
- The other person, watching from some distance, will measure the time from the moment the stone is thrown until it hits the ground. The other measurement we will need is the distance Δx the stone travelled.
- Also, for the first iteration of the experiment, suppose we only keep the measurements with an initial angle θ~45° (we will loosen this constraint in a bit).

Let's use the computer to help with all these calculations. As always, we first import the needed libraries:

```julia
using Distributions
using StatsPlots
using Turing
```

Suppose we did the experiment and measured the 5 points, Δx and Δt, shown below:

```julia
Δx = [25.94, 38.84, 52.81, 45.54, 17.24]
Δt = [3.91, 4.57, 5.43, 4.85, 3.15]
```

Now, how do we estimate the constant g from those points? When the stone hits the ground, we have that \\(y(t) = 0\\). If we solve for \\(t\\), ignoring the trivial solution \\(t = 0\\), we find that

\\(t_{f} = \\frac{2*v_{0}*sin(θ)}{g}\\)

where \\(t_{f}\\) is the time at which the stone hits the ground, the time we have measured. Replacing this time in the \\(x(t)\\) equation, we find that:

\\(Δx=t_{f}*v_{0}*cos(θ)\\)

where Δx is the distance traveled by the stone. Solving for \\(v_{0}\\), the initial velocity, an unknown quantity, we have:

\\(v_{0}=\\frac{Δx}{t_{f}cos(θ)}\\)

Then, replacing it in the equation for \\(t_{f}\\) and solving for \\(\\Delta x\\), we obtain:

\\(Δx=\\frac{g*t_{f}^2}{2*tan(θ)}\\)

So, the model we are going to propose is a linear regression. A linear equation has the form:

\\(y=m*x +b\\)

where \\(m\\) is the slope of the line and \\(b\\) is called the intercept. In our case, if we take \\(x\\) to be \\(\\frac{t_{f}^2}{2}\\) (with θ=45°, tan(θ)=1), the slope of the line is g and the intercept is 0. In our linear model, we propose that each point of the regression is drawn from a Gaussian distribution whose center is a point on the line, as shown in the plot below:

\\(\\mu = m*x + b\\)

\\(y \\sim Normal(\\mu,\\sigma^2)\\)

So, our linear model will be:

\\(g \\sim Distribution\\ to\\ be\\ proposed\\)

\\(\\mu[i] = g*\\frac{t_{f}^2[i]}{2}\\)

\\(\\Delta x[i] \\sim Normal(\\mu[i],\\sigma^2)\\)

where \\(g\\) has a distribution we will propose next.
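Before proposing a prior, a quick deterministic sanity check can bound the plausible range: inverting \\(Δx=\\frac{g*t_{f}^2}{2*tan(θ)}\\) gives one point estimate of g per throw. This is just a sketch reusing the measured vectors above, assuming θ is exactly 45°:

```julia
# One point estimate of g per measurement (tand takes the angle in degrees).
g_points = 2 .* tand(45) .* Δx ./ (Δt .^ 2)
# ≈ [3.39, 3.72, 3.58, 3.87, 3.48], so priors placing mass around 3-4 m/s² are sensible
```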
The first distribution we are going to propose is a uniform prior for g, between the values of 0 and 10:

```julia
plot(Uniform(0,10), xlim=(-1,11), ylim=(0,0.2), legend=false, fill=(0, .5, :lightblue));
xlabel!("g_mars");
ylabel!("Probability density");
title!("Uniform prior distribution for g")
```

Now we define the model in Turing and sample from the posterior distribution:

```julia
@model gravity_uniform(t_final, x_final, θ) = begin
    # The number of observations.
    N = length(t_final)
    g ~ Uniform(0,10)
    μ = g .* 1/2 .* t_final.^2
    for n in 1:N
        x_final[n] ~ Normal(μ[n], 10)
    end
end

iterations = 10000
θ = 45
chain_uniform = sample(gravity_uniform(Δt, Δx, θ), NUTS(), iterations, progress=false);
```

Plotting the posterior distribution for g, we see that the values lie mostly between 2 and 5, with the maximum near 3.8. Can we narrow down the values we obtain?

```julia
histogram(chain_uniform[:g], xlim=[1,6], legend=false, normalized=true);
xlabel!("g_mars");
ylabel!("Probability density");
title!("Posterior distribution for g with Uniform prior")
```

As a second option, we propose a Gaussian distribution for \\(g\\) instead of a uniform one, like the one shown below with a mean of 5 and a standard deviation of 2, and let the model update its beliefs with the points we have:

```julia
plot(Normal(5,2), legend=false, fill=(0, .5,:lightblue));
xlabel!("g_mars");
ylabel!("Probability density");
title!("Normal prior distribution for g")
```

We then define the model with a Gaussian prior for \\(g\\) (note that the model below centers it at 6):

```julia
@model gravity_normal(t_final, x_final, θ) = begin
    # The number of observations.
    N = length(t_final)
    g ~ Normal(6,2)
    μ = g .* 1/2 .* t_final.^2
    for n in 1:N
        x_final[n] ~ Normal(μ[n], 3)
    end
end
```

Now we sample values from the posterior distribution and plot a histogram of the values obtained:

```julia
chain_normal = sample(gravity_normal(Δt, Δx, θ), NUTS(), iterations, progress=false)

histogram(chain_normal[:g], xlim=[3,4.5], legend=false, normalized=true);
xlabel!("g_mars");
ylabel!("Probability density");
title!("Posterior distribution for g with Normal prior")
```

We see that the plausible values for the gravity now have a clear center at 3.7 and the distribution is narrower. That's good, but we can do better. If we look at the Normal prior proposed for \\(g\\), we see that it assigns probability to negative values, which makes no sense: if gravity were negative, a thrown stone would go up and up, escaping from the planet. We therefore propose a new model that doesn't allow negative values. The distribution we are interested in is a LogNormal distribution, whose support covers only the positive numbers. The plot below shows the prior distribution for g, a LogNormal distribution with log-scale parameters μ=1 and σ=0.5:

```julia
plot(LogNormal(1,0.5), xlim=(0,10), legend=false, fill=(0, .5,:lightblue));
xlabel!("g_mars");
ylabel!("Probability density");
title!("Prior LogNormal distribution for g")
```

The model gravity_lognormal defined below now has a LogNormal prior (here with log-scale parameters 0.5 and 0.5). We sample the posterior distribution after updating with the measured data:

```julia
@model gravity_lognormal(t_final, x_final, θ) = begin
    # The number of observations.
    N = length(t_final)
    g ~ LogNormal(0.5,0.5)
    μ = g .* 1/2 .* t_final.^2
    for n in 1:N
        x_final[n] ~ Normal(μ[n], 3)
    end
end

chain_lognormal = sample(gravity_lognormal(Δt, Δx, θ), NUTS(), iterations, progress=false)

histogram(chain_lognormal[:g], xlim=[3,4.5], legend=false, normalized=true);
xlabel!("g_mars");
ylabel!("Probability density");
title!("Posterior distribution for g with LogNormal prior")
```
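To compare the three posteriors with numbers rather than histograms, a short summary loop can help. This is a sketch: Statistics is part of Julia's standard library, and chain[:g] follows the same chain indexing used in the plots above:

```julia
using Statistics

for (name, chain) in [("Uniform", chain_uniform), ("Normal", chain_normal),
                      ("LogNormal", chain_lognormal)]
    samples = vec(chain[:g])   # flatten the sampled values for g
    println(name, " prior → posterior mean ≈ ", round(mean(samples), digits=2),
            ", std ≈ ", round(std(samples), digits=2))
end
```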
6.3 Optimizing the throwing angle

Now that we have a good understanding of the equations and the overall problem, we are going to add a difficulty and loosen a constraint we had imposed: suppose that the device employed to measure the angle has an error of 15°, no matter the angle. We want to know what the most convenient angle is for doing the experiment, or whether it even matters. To do the analysis, we need to see how the angle influences the computation of \\(g\\); solving the equation for \\(g\\), we have:

\\(g = \\frac{2*tan(\\theta)*\\Delta x}{t^{2}_f}\\)

We can then plot the tangent of θ with an error of 15° and see its maximum and minimum values:

```julia
angles = 0:0.1:70
error = 15/2
μ = tan.(deg2rad.(angles))
ribbon = tan.(deg2rad.(angles .+ error)) - μ
plot(angles, μ, ribbon=ribbon, color="lightblue", legend=false);
xlabel!("θ [deg]");
ylabel!("tan(θ)");
title!("tan(θ) and its error")
```

But we don't care about the absolute value of the error; we want the relative error. Plotting the percentage error, we have:

```julia
err = tan.(deg2rad.(angles .+ error)) .- tan.(deg2rad.(angles .- error))
perc_error = err .* 100 ./ μ
plot(angles, perc_error, xlim=(5,70), ylim=(0,200), color="lightblue", legend=true, lw=3, label="Percentage error");
xlabel!("θ [deg]");
ylabel!("Δtan(θ)/tan(θ) [%]");
title!("Percentage error");
vline!([angles[findfirst(x->x==minimum(perc_error), perc_error)]], lw=3, label="Minimum error")
```

Now we see that the lowest percentage error is obtained for angles near 45°, so we are good to go: we can use the data we measured, adding the error in the angle. We now define the new model, which includes the uncertainty in the angle. We propose a uniform prior for the angle, centered at 45°, the angle at which we think the measurements were made:

```julia
@model gravity_angle_uniform(t_final, x_final, θ) = begin
    # The number of observations.
    error = 15
    angle ~ Uniform(45 - error/2, 45 + error/2)
    g ~ LogNormal(log(4),0.3)
    μ = g .* (t_final.*t_final./(2 * tan.(deg2rad(angle))))
    N = length(t_final)
    for n in 1:N
        x_final[n] ~ Normal(μ[n], 10)
    end
end

chain_uniform_angle = sample(gravity_angle_uniform(Δt, Δx, θ), NUTS(), iterations, progress=false)

histogram(chain_uniform_angle[:g], legend=false, normalized=true);
xlabel!("g_mars");
ylabel!("Probability density");
title!("Posterior distribution for g, including uncertainty in the angle")
```
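Before converting this posterior into an escape velocity, it can help to reduce it to a few numbers. A sketch using the standard-library Statistics functions:

```julia
using Statistics

g_samples = vec(chain_uniform_angle[:g])
# Posterior mean and a 90% credible interval for the Martian surface gravity.
mean(g_samples), quantile(g_samples, [0.05, 0.95])
```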
6.3.1 Calculating the escape velocity

Now that we have estimated the gravity, we are going to calculate the escape velocity. What data do we have so far? We know from the beginning that:

\\(R_{Earth}\\approxeq 2R_{Mars}\\)

\\(g_{Earth}\\approxeq 9.8\\frac{m}{s^2}\\)

and we have also computed the distribution of plausible values of \\(g_{Mars}\\). So, replacing these in the equation of the escape velocity:

\\(\\frac{v_{Mars}}{v_{Earth}} =\\sqrt{\\frac{g_{Mars}*R_{Mars}}{g_{Earth}*R_{Earth}}}\\)

so,

\\(\\frac{v_{Mars}}{11} =\\sqrt{\\frac{g_{Mars}*R_{Mars}}{9.8*2*R_{Mars}}} \\qquad \\left[\\frac{km}{s} \\right]\\)

\\(v_{Mars} =11 * \\sqrt{\\frac{g_{Mars}}{9.8*2}} \\qquad \\left[\\frac{km}{s} \\right]\\)

```julia
v = 11 .* sqrt.(chain_uniform_angle[:g] ./ (9.8*2))

histogram(v, legend=false, normalized=true);
title!("Escape velocity from Mars");
xlabel!("Escape velocity of Mars [km/s]");
ylabel!("Probability density")
```

Finally, we have obtained the escape velocity from Mars.

6.4 Summary

In this chapter, we set out to find the escape velocity from Mars. To solve this problem, we first needed the gravity of Mars, so we started with a physical description of the problem and concluded that by measuring the distance and flight time of a thrown rock, plus some Bayesian analysis, we could infer the Martian gravity. We then created a simple probabilistic model, with the prior probability set to a uniform distribution and the likelihood to a normal distribution, sampled it, and obtained our first posterior distribution. We repeated this process two more times, changing the prior distribution of the model for more suitable ones: first a Normal distribution and then a LogNormal one. Finally, we used the gravity we inferred to calculate the escape velocity from Mars.

6.5 References

Khatri, M.K., Poudel, P.R., & Gautam, A.K. (2010). Principles of Physics. Kathmandu: Ayam Publication, pp. 170-171.