Demonstrates using reproducible data visualisations to augment redaction decisions during small cell suppression and to create documentation transparent to a non-technical audit.
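As a flavour of the idea, here is a minimal sketch of flagging small cells visually so a reviewer can audit the redaction decisions; the toy counts and the threshold of 5 are my own illustration, not taken from the talk.

```r
library(dplyr)
library(ggplot2)

# Toy counts table (illustrative only)
counts <- expand.grid(region    = paste("Region", 1:4),
                      age_group = c("0-17", "18-49", "50+")) |>
  mutate(n = c(2, 14, 30, 1, 25, 8, 4, 40, 3, 19, 6, 11))

# Flag cells below the (assumed) suppression threshold and show them
counts |>
  mutate(flag = n < 5) |>
  ggplot(aes(age_group, region, fill = flag)) +
  geom_tile(colour = "white") +
  geom_text(aes(label = n)) +
  scale_fill_manual(values = c(`TRUE` = "tomato", `FALSE` = "grey80"),
                    name = "Below threshold") +
  labs(title = "Cells flagged for small cell suppression review")
```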
Demonstrates how to 1) build interactive visualizations using `plotly::ggplotly()`, 2) compute relative timelines for each country, and 3) plot the sequence of key events for cross-country comparison.
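A minimal sketch of the three steps, using invented event data; the talk's actual countries and events differ.

```r
library(dplyr)
library(ggplot2)
library(plotly)

# Fabricated key events for two countries (illustrative only)
events <- tribble(
  ~country, ~event,           ~date,
  "A",      "first case",     as.Date("2020-01-20"),
  "A",      "border closure", as.Date("2020-03-15"),
  "B",      "first case",     as.Date("2020-02-05"),
  "B",      "border closure", as.Date("2020-03-01")
)

# 2) relative timeline: days since each country's first event
events <- events |>
  group_by(country) |>
  mutate(day = as.numeric(date - min(date))) |>
  ungroup()

# 3) sequence of key events, comparable across countries
p <- ggplot(events, aes(day, country, colour = event, text = event)) +
  geom_point(size = 3) +
  labs(x = "Days since first event")

# 1) make it interactive with hover tooltips
ggplotly(p, tooltip = "text")
```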
A list of learning resources that I like having on speed dial.
Recent example of 1) interpreting models through graphs rather than parameters and 2) using a self-contained RMarkdown notebook vs the .R + .Rmd split.
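To make the second point concrete, here is a sketch of the ".R + .Rmd split"; the file names and the `fit_model()` helper are hypothetical.

```r
# analysis.R -- the heavy lifting lives in a plain script
fit_model <- function(data) {
  glm(outcome ~ predictor, data = data, family = binomial)
}

# report.Rmd -- the notebook stays thin and sources the script:
# ```{r setup}
# source("analysis.R")
# model <- fit_model(my_data)
# ```
# In a self-contained notebook, fit_model() would instead be
# defined inside the .Rmd itself.
```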
The workshop introduces R and RStudio and makes the case for project-oriented workflows in applied data analysis. Using logistic regression on Titanic data as an example, participants will learn to communicate statistical findings more effectively and will evaluate the advantages of using computational notebooks in RStudio to disseminate results.
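A minimal sketch of the kind of model involved, using the `Titanic` contingency table shipped with base R; the workshop may use a different Titanic data source.

```r
# Expand the 4-way Titanic table into a data frame with a Freq column
titanic <- as.data.frame(Titanic)

# Logistic regression on survival, weighting each row by its count
fit <- glm(Survived ~ Class + Sex + Age,
           data = titanic, weights = Freq, family = binomial)

summary(fit)    # parameter estimates
exp(coef(fit))  # odds ratios, often easier to communicate
```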
Demonstrates methods for suppressing small counts in a provincial surveillance system when preparing data for public release.
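A minimal sketch of threshold-based suppression; the threshold of 5 and the toy data are illustrative, and real systems apply additional rules.

```r
library(dplyr)

# Fabricated surveillance counts (illustrative only)
surveillance <- tibble::tibble(
  region = c("North", "South", "East", "West"),
  cases  = c(12L, 3L, 27L, 4L)
)

# Replace counts below the threshold with NA (meaning "suppressed")
released <- surveillance |>
  mutate(cases = if_else(cases < 5L, NA_integer_, cases))
# A real system would also suppress complementary cells so that
# marginal totals cannot be used to back out the hidden counts.
```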
Visualising results of statistical modeling is a key component of the data science workflow. Statistical graphs are often the best means to explain and promote research findings. However, in order to find that one graph that tells the story worth sharing, we sometimes have to try out and sift through many data visualizations. How should we approach such a task? What can we do to make it easier from both production and evaluation perspectives?
While computational notebooks offer scientists and engineers many helpful features, the limitations of the medium make them only a starting point in creating software, the practical goal of data science. Where do we go from computational notebooks when our projects require multiple interconnected scripts and dynamic documents? How do we ensure reproducibility amid the growing complexity of analyses and operations?
I will use a concrete analytical example to demonstrate how constructing workflows for reproducible analyses can serve as the next step from computational notebooks toward creating analytical software.
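A minimal sketch of what such a pipeline can look like; I use the {targets} package here as one possible tool, which is my assumption, as the talk may demonstrate a different approach, and the file path and `value` column are hypothetical.

```r
# _targets.R
library(targets)
tar_option_set(packages = c("dplyr", "ggplot2"))

list(
  tar_target(raw,   read.csv("data/raw.csv")),           # hypothetical path
  tar_target(clean, filter(raw, !is.na(value))),         # assumes a `value` column
  tar_target(plot,  ggplot(clean, aes(value)) + geom_histogram())
)
# Running targets::tar_make() rebuilds only the steps whose inputs
# changed, keeping a multi-script analysis reproducible.
```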
Visualizing the variability in the clinical histories of patients with a confirmed diagnosis of (1) schizophrenia or (2) bipolar disorder, using cross-continuum clinical records.
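One way such histories can be drawn, sketched with fabricated patients, events, and dates; the project's actual event taxonomy and data differ.

```r
library(ggplot2)

# Fabricated per-patient event histories (illustrative only)
histories <- data.frame(
  patient = rep(c("P1", "P2", "P3"), times = c(3, 2, 4)),
  event   = c("first contact", "admission", "discharge",
              "first contact", "admission",
              "first contact", "admission", "discharge", "readmission"),
  day     = c(0, 40, 95, 0, 10, 0, 120, 180, 300)
)

# One horizontal timeline per patient, events as coloured points
ggplot(histories, aes(day, patient)) +
  geom_line(aes(group = patient), colour = "grey70") +
  geom_point(aes(colour = event), size = 3) +
  labs(x = "Days since first contact",
       title = "Per-patient clinical event timelines")
```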