LinkedIn and Medium – Places to Share

Where to put “content” and where to “demo” functionality!

Recently, I’ve been looking more into posting on LinkedIn and Medium. In addition to my personal blog, I wanted a place to highlight my interests and what I’m working on. I’ve used knowledge management and sharing tools like Confluence, SharePoint and OneNote to post my thoughts on work-related topics on the “intranets” at places where I’ve worked.

Now, I’d like to post more online so I can find information quickly and share with others. As part of this I’m using this blog, my LinkedIn space and Medium as places to post and share.

The idea is to expand on my writing and capture more “online” than in OneNote. I’m also looking at Notion.ai, which may be where I end up putting more “content” this year.

Online Examples of Work

Over the last couple of years, I’ve created visualizations using R, Python, Power BI, Tableau and Qlik. Here are a couple of places where I’ve put these examples.

RStudio – as best I can remember, I started thinking about using RPubs to host content about four years ago. https://rpubs.com/ericfrayer

Published MERA Bar plot to RPubs

Power BI – Using the same dataset, this time loaded from Parquet format (ADLS Gen2) into Power BI, the same visualization can be created.

Data Loaded from Parquet file in Data Lake
Transform Data – Power Query – Data Load of Home Owner Parquet File

Once the data has been loaded into Power BI, a simple bar chart visualization can be created.
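The same Parquet file can also be sanity-checked with a few lines of Python before (or after) loading it into Power BI. This is a minimal sketch; the file name is an assumption, and the checks simply mirror what the Power Query preview shows.

```python
# Sketch: quick sanity check of the "Home Owner" Parquet file with pandas.
# The file name is assumed; point it at your local or Data Lake copy.
import pandas as pd

df = pd.read_parquet("home_owner.parquet")  # pandas uses pyarrow under the hood

print(df.shape)   # row/column counts to compare against the Power Query preview
print(df.dtypes)  # data types inferred from the Parquet schema
print(df.head())  # first few records for a quick visual check
```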

Python – Of course, this same dataset can be loaded into Python as a CSV and, using Visual Studio Code, yet another visualization can be created.

Final version – Bar chart showing relationship between Claims History and Local Weather Conditions

It should be noted that GPT-4 helped with the code and also with creating a “statistically significant” relationship in the data, with the Local Weather Conditions (either normal or severe) adversely impacting Claims History.
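For reference, the chart above can be reproduced with a short script along these lines. It’s a sketch of the kind of code GPT-4 helped produce, not the exact code; the CSV file name is assumed, and the column labels are taken from the chart caption.

```python
# Sketch: average Claims History grouped by Local Weather Conditions
# ("normal" vs. "severe"). File name and exact column labels are assumed.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("home_owner.csv")

summary = df.groupby("Local Weather Conditions")["Claims History"].mean()

ax = summary.plot(kind="bar", color=["steelblue", "indianred"])
ax.set_xlabel("Local Weather Conditions")
ax.set_ylabel("Average Claims History")
ax.set_title("Claims History by Local Weather Conditions")
plt.tight_layout()
plt.show()
```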

Data Literacy

Recently, I worked on developing and delivering a Data Literacy for Decision Makers Workshop. It was a great experience which required me to work on my own soft and technical skills.

Read more about my experience here! Note: this is on Medium. You don’t need a subscription to read, but you will be prompted to “sign in”. Feel free to close the prompt and read the content. Or maybe consider a subscription to Medium – it’s a great site!

GPT-4 and Code Interpreter are hotter than Barbie!

Happy Summer! I’m looking forward to seeing Oppenheimer and Barbie on vacation next week. Both are summer blockbusters which are hot, hot, hot! Explosive and blowing up everywhere. So… What could be bigger and a “real-life actual” game changer?
Generative Pre-trained Transformer 4 (GPT-4) and Large Language Models (LLMs).

It’s hard to describe how jaw-dropping OpenAI GPT-4 Plus is and, for only $20 per month, how it can change your life. The ability to load a dataset, run analysis, plot the results, and have the Python code available with narrative describing the rationale behind advanced statistics is unbelievable. It’s clean, fast, and overall, technically accurate. Note: I’ve executed the Python code generated by GPT-4 in my own Jupyter notebook and Visual Studio Code to check the results.
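To give a flavor of that check, here is a sketch of the kind of test you might re-run locally to confirm a “statistically significant” relationship like the one reported for the Home Owner dataset. The file and column names are assumptions carried over from the earlier example.

```python
# Sketch: re-running a chi-square test of independence locally to verify a
# relationship GPT-4 reported. File and column names are assumed.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("home_owner.csv")

# Cross-tabulate weather conditions against claims history
table = pd.crosstab(df["Local Weather Conditions"], df["Claims History"])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}, dof = {dof}")
```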

I’ll post additional thoughts on my node.js Azure sandbox but don’t wait for me – go get a subscription and try it out for yourself!

Data Warehouse, Data Lakehouse and Data Mesh

Last year, I read a very interesting blog post by Darwin Schweitzer, a Microsoft technologist, who discusses how to consider emerging technologies in the context of building sustainable enterprises. The blog post relates the learning patterns organizations adopt to newer data technologies replacing existing capabilities. The approach covers the strategic, organizational, architectural and technological challenges and changes that come with scaling enterprise analytics.

Three Horizon Model/Framework – strategic
Data Mesh Sociotechnical Paradigm – organizational
Data Lakehouse Architecture – architectural
Azure Cloud Scale Analytics Platform – technological

Read more by following this link:

https://techcommunity.microsoft.com/t5/data-architecture-blog/bring-vision-to-life-with-three-horizons-data-mesh-data/ba-p/3390414


https://geoparquet.org/ is worth checking out!

Recently, I watched a webinar hosted by TDWI, Databricks and Carto. The topic was Unlocking the Power of Spatial Analysis and Data Lakehouses. A copy of the webinar and the slide deck shared are available here. What I liked about the session was the use of Databricks and a Data Lake to provide Spatial Data. There was also a brief discussion on the role of the Open Geospatial Consortium. This group is working on the specification for the GeoParquet file format. For anyone with an interest in GIS, Mapping, Data and Analytics, this is worth checking out!

https://github.com/opengeospatial/geoparquet
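If you want to try the format yourself, GeoPandas can read and write Parquet files that follow the GeoParquet metadata convention. A minimal sketch (the points and attribute values are made up for illustration, and pyarrow is required):

```python
# Sketch: writing and reading a GeoParquet file with GeoPandas.
# The points and attribute values are illustrative only.
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"name": ["Cincinnati", "Columbus"]},
    geometry=[Point(-84.51, 39.10), Point(-82.99, 39.96)],
    crs="EPSG:4326",
)

gdf.to_parquet("cities.parquet")            # writes GeoParquet-style metadata
round_trip = gpd.read_parquet("cities.parquet")
print(round_trip)
```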

ESRI Tapestry Segmentation

For the last 20 years, ESRI has been capturing geographic and demographic data. In the ’90s, Acxiom (and others) came up with Lifestyle Segmentation. ESRI started in 1969 and today maps “everything” down to the household level.

I’m surprised by the accuracy of Tapestry Segmentation. The top three segments in my zip code are: “Top Tier”, “Comfortable Empty Nesters” and “In Style”. Pretty accurate – with one data point. I’m an Empty Nester who would like to be “In Style” or “Top Tier” but feel very lucky and fortunate to be “comfortable”.

The 45243 Zip Code

Here is a link to a PDF with more on Tapestry Segmentation and my personal segment.
Just in case you were interested, here are the details on “Top Tier” and “In Style”.

What is Differential Privacy?

Differential privacy seeks to protect individual data values by adding statistical “noise” to the analysis process. The math involved in adding the noise is complex, but the principle is fairly intuitive: the noise ensures that data aggregations stay statistically consistent with the actual data values, allowing for some random variation, but makes it impossible to work out the individual values from the aggregated data. In addition, the noise is different for each analysis, so the results are non-deterministic – in other words, two analyses that perform the same aggregation may produce slightly different results.
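A minimal sketch of the idea, using the Laplace mechanism on a simple count (illustration only, not a production implementation; the toy dataset and epsilon value are made up):

```python
# Sketch: Laplace mechanism on a count. Noise scaled by sensitivity / epsilon
# keeps the aggregate close to the true value but non-deterministic.
import numpy as np

rng = np.random.default_rng()

def noisy_count(values, epsilon=1.0):
    """Return a count with Laplace noise; the sensitivity of a count is 1."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(values) + noise

ages = [34, 45, 29, 61, 52, 38]   # toy dataset
print(noisy_count(ages))           # close to 6, but with random variation
print(noisy_count(ages))           # a slightly different answer each run
```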

Understand Differential Privacy!