Digital Landscape Traces: Visualization

Alexander Dunkel, Leibniz Institute of Ecological Urban and Regional Development,
Transformative Capacities & Research Data Centre (IÖR-FDZ)

Out[88]:

Last updated: Nov-04-2025, Carto-Lab Docker Version 1.1.0

This notebook creates the final high-resolution spatial visualization of social media posts in Germany. It builds upon all previous steps by combining the full-resolution user coordinates with the final user classifications.

The key objectives are:

  1. Full Resolution: To eliminate the grid-like artifacts from the second notebook by using the original post coordinates (only slightly geohash-aggregated) from the Parquet files.
  2. Categorical Representation: To color each pixel on the map based on the majority user type (Local, Tourist, or Unclassified) present in that area.
  3. Geographic Context: To overlay Germany's national border for clear geographic reference.

The workflow uses a Dask/Parquet pipeline to handle the large volume of full-resolution coordinates efficiently.

Preparations

First, we import the necessary libraries and configure the notebook for development with automatic module reloading.

In [89]:
import os
import sys
import numpy as np
import pyarrow as pa
import xarray as xr
from pathlib import Path
import pandas as pd
import geopandas as gp
import dask.dataframe as dd
import dask.diagnostics as diag
import datashader as ds
import matplotlib.pyplot as plt
import datashader.transfer_functions as tf
from datashader.utils import lnglat_to_meters
from IPython.display import clear_output, display
from datashader.colors import rgb
import holoviews as hv
from holoviews.operation.datashader import datashade, dynspread
import hvplot.pandas
import xyzservices.providers as xyz
import hvplot.dask
from typing import Tuple, Optional
•••
List of package versions used in this notebook
package   python    geopandas  pandas  datashader  dask      matplotlib
version   3.12.11   1.1.1      2.3.3   0.18.2      2025.9.1  3.10.6

Load dependencies:
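
The project-specific helper modules used later in this notebook (tools.get_stream_file(), tools.tree(), raster.save_datashader_to_geotiff()) are provided by the bundled py/modules/base package and are loaded in a collapsed cell. A minimal sketch of what that cell likely does, assuming the repository layout shown in the file tree at the end of this notebook (the actual cell may differ):

import sys
from pathlib import Path

# Hypothetical sketch: make the bundled helper package importable and load the
# modules used later in this notebook (tools, raster).
module_path = Path.cwd().parents[0] / "py" / "modules"
if str(module_path) not in sys.path:
    sys.path.append(str(module_path))

from base import tools, raster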

Activate autoreload of changed python files:

In [91]:
%load_ext autoreload
%autoreload 2
The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload

Parameters

Define initial parameters that affect processing

In [92]:
OUTPUT = Path.cwd().parents[0] / "out"
WORK_DIR = Path.cwd().parents[0] / "tmp"
In [93]:
PARQUET_OUTPUT = OUTPUT / "de_classified_points.parquet"
PARQUET_OUTSIDE_DE = OUTPUT / "de_outside_background.parquet"

Categorical Visualization with Datashader

With the data prepared, we can now create our visualization. We will follow the method from the Census example, defining a color key and using ds.by() to aggregate the data by our classification column.

In [94]:
df = dd.read_parquet(PARQUET_OUTPUT)
df = df.persist()
print(f"Data loaded with {len(df):,} points.")
Data loaded with 66,632,117 points.
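
As an optional sanity check (a hedged sketch, not part of the original pipeline), the partition count and approximate in-memory size of the persisted frame can be inspected:

# Optional: inspect the persisted Dask frame after .persist().
print(f"Partitions: {df.npartitions}")
mem_mb = df.memory_usage(deep=True).sum().compute() / 1024**2
print(f"Approx. memory footprint: {mem_mb:,.0f} MB")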
In [95]:
df.columns
Out[95]:
Index(['x', 'y', 'classification'], dtype='object')
In [96]:
df.head(20)
Out[96]:
x y classification
0 1483566.2819869858 6891334.7061789026 Tourist
1 761542.1128052111 6609989.5677423635 Tourist
2 1491821.4810417851 6896610.2142265579 Tourist
3 971132.4443631642 6064002.4931831649 Tourist
4 790282.4354404373 6540901.0447235527 Tourist
5 964253.1118174985 6464754.5782011142 Tourist
6 766281.2085588919 6698279.2409675419 Tourist
7 1376248.6942745985 6682111.0319848154 Local
8 790129.5613838669 6541141.6072320854 Tourist
9 677614.2557480875 6584557.5171321826 Tourist
10 931843.8118245837 6311709.4666351499 Tourist
11 776523.7703491056 6610959.9653587285 Unclassified
12 750535.1807321456 6667196.0950509030 Unclassified
13 1489069.7480235186 6894097.6356955748 Local
14 1506497.3904725390 6878537.1496626856 Tourist
15 960125.5122900989 6469523.1187174581 Unclassified
16 1195398.6853518714 6033140.7326419000 Unclassified
17 1127981.2264043461 7231226.3896553535 Tourist
18 1496560.5767954658 6893092.8241713196 Tourist
19 1098782.2815994087 7091972.9220264973 Local
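
Note that ds.by() expects the classification column to be a categorical dtype with known categories. If the Parquet files stored it as plain strings, a conversion along these lines would be needed before aggregation (a hedged sketch; skip it if the column is already categorical):

# Only needed if 'classification' arrives as plain strings:
# ds.by() requires a Dask categorical column with known categories.
df['classification'] = df['classification'].astype('category').cat.as_known()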

Define the color key for our categories and the boundary for the map.

In [97]:
DE_BOUNDS            = ((4.605469, 16.37207), (46.697243, 55.685885))
DRESDEN_BOUNDS       = (( 13.415680,  14.703827), ( 50.740090, 51.194905))
BERLIN_BOUNDS        = (( 12.843018,  14.149704), ( 52.274880, 52.684292))
LEIPZIG_BOUNDS       = ((12.203201, 12.554764), (51.266002, 51.375322))
MUNICH_BOUNDS        = ((11.240593, 11.943718), (48.004232, 48.237774))
FRANKFURT_BOUNDS     = ((5.478882, 11.103882), (49.150450, 50.947349))
BALTIC_COAST_BOUNDS  = ((8.942322, 14.567322), (53.308202, 54.947941))

color_key = {
    'Local': 'blue',
    'Tourist': 'red',
    'Unclassified': 'cornflowerblue'
    # 'Unclassified': 'slategrey'
}
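
For comparison, the plain Census-style rendering would pass the categorical aggregate directly to tf.shade() with this color key, which blends category colors within each pixel. A minimal sketch of that variant (the functions below instead assign a single winner-takes-all color per pixel):

# Census-style sketch for comparison: per-pixel colors are mixed according to
# category counts instead of picking a single majority color.
cvs = ds.Canvas(plot_width=1200, plot_height=1200)
agg = cvs.points(df, x='x', y='y', agg=ds.by('classification'))
img = tf.set_background(tf.shade(agg, color_key=color_key, how='eq_hist'), 'white')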
In [98]:
NUTS_GPKG_FILE = "NUTS_RG_01M_2024_4326.gpkg"
if not Path(OUTPUT / NUTS_GPKG_FILE).exists():
    tools.get_stream_file(
        f"https://gisco-services.ec.europa.eu/distribution/v2/nuts/gpkg/{NUTS_GPKG_FILE}", OUTPUT / NUTS_GPKG_FILE)
else: print("Already exists")
Already exists
In [99]:
nuts = gp.read_file(OUTPUT / NUTS_GPKG_FILE)
# Select country-level (NUTS level 0) boundaries; with the CNTR_CODE filter
# commented out, neighbouring countries are included as well.
nuts1_de = nuts[
    # (nuts['CNTR_CODE'] == 'DE') &
    (nuts['LEVL_CODE'] == 0)
]

Initial Visualization: A "Winner-Takes-All" Approach

With the data prepared, our first step is to create a baseline visualization. The function below uses a "winner-takes-all" method: for each pixel on the map, it determines which category of user (Local, Tourist, or Unclassified) is most numerous and assigns that category's color to the pixel.

This approach is effective for quickly showing the dominant user type in any given area. However, as we will see, it has a key limitation: it treats a pixel with 10,000 points the same as a pixel with 10 points, hiding the crucial information about data density and activity levels.
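
To make the core trick concrete, here is a toy illustration (hand-made counts, not real data) of the argmax step that the function below applies to the (y, x, classification) aggregate produced by ds.by():

import numpy as np
import xarray as xr

cats = ['Local', 'Tourist', 'Unclassified']
counts = np.array([            # shape (y=2, x=2, classification=3)
    [[5, 1, 0], [0, 0, 0]],    # pixel (0,0): Local wins; pixel (0,1): empty
    [[2, 7, 1], [0, 0, 3]],    # pixel (1,0): Tourist wins; (1,1): Unclassified
])
agg = xr.DataArray(counts, dims=('y', 'x', 'classification'),
                   coords={'classification': cats})

majority = agg.argmax(dim='classification')                   # index of winning category
majority = majority.where(agg.sum(dim='classification') > 0)  # mask empty pixels
print(majority.values)
# [[ 0. nan]
#  [ 1.  2.]]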

In [100]:
def create_bordered_map(
    df, bounds: Tuple[Tuple[float, float], Tuple[float, float]], border_gdf,
    color_key, plot_width=1200, background='white'
):
    """
    Generates a datashaded categorical map using a "winner-takes-all" method
    and overlays country borders, correctly zoomed to the target area.
    """
    x_range, y_range = bounds
    # Calculate the plot height from the aspect ratio of the bounds; the 1.5
    # factor roughly compensates for the north-south stretch of Web Mercator
    # at German latitudes. Then project the lng/lat bounds to Web Mercator.
    lng_width = x_range[1] - x_range[0]
    lat_height = y_range[1] - y_range[0]
    plot_height = int(((plot_width * lat_height) / lng_width) * 1.5)
    x_coords, y_coords = lnglat_to_meters(x_range, y_range)

    # Create the Datashader canvas
    cvs = ds.Canvas(
        plot_width=plot_width, plot_height=plot_height,
        x_range=x_coords, y_range=y_coords
    )

    # Create the 3D (y, x, classification) aggregate object.
    print("Aggregating points by category...")
    with diag.ProgressBar():
        agg = cvs.points(df, x='x', y='y', agg=ds.by('classification'))

    # Use argmax() to find the integer index of the majority category for each pixel.
    # The result is a 2D array of integers (0 for 'Local', 1 for 'Tourist', etc.).
    print("Finding majority category for each pixel...")
    majority_indices = agg.argmax(dim='classification')

    # Mask out pixels that have no data points at all.
    majority_indices = majority_indices.where(agg.sum(dim='classification') > 0)

    # Create a simple list of colors in the correct order.
    # This order matches the integer indices from the argmax() step.
    cats = list(agg.coords['classification'].values)
    color_list = [color_key[cat] for cat in cats]

    print("Shading image...")
    img = tf.shade(majority_indices, cmap=color_list)

    # --- Create the Final Plot with Matplotlib ---
    print("Rendering final plot with borders...")
    fig, ax = plt.subplots(1, 1, figsize=(15, 15), facecolor=background)

    ax.imshow(img.to_pil(), extent=[x_coords[0], x_coords[1], y_coords[0], y_coords[1]])

    border_gdf.to_crs(epsg=3857).plot(
        ax=ax, facecolor='none', edgecolor='black', linewidth=0.5, alpha=0.7
    )

    ax.set_xlim(x_coords)
    ax.set_ylim(y_coords)
    ax.set_axis_off()
    plt.tight_layout()
    plt.show()

    return img
In [101]:
img = create_bordered_map(
    df,
    DE_BOUNDS,
    nuts1_de,
    color_key,
    background='white'
)
Aggregating points by category...
[########################################] | 100% Completed | 304.09 ms
Finding majority category for each pixel...
Shading image...
Rendering final plot with borders...
[Figure: winner-takes-all map of Germany, colored by majority user type (Local, Tourist, Unclassified)]

Advanced Visualization: Adding Density and Weighting

The simple map above is a good start, but it has a key limitation: every data point is rendered with the same intensity, regardless of whether a pixel represents a single post or ten thousand. This makes it difficult to distinguish high-activity urban centers from sparse rural areas.

The new create_bordered_map_alpha() function creates a much richer visualization by independently controlling three key visual elements:

  1. The 'Winner-Takes-All' Color: The color of each pixel (red, blue, or grey) is still determined by the majority category. Crucially, this version introduces a local_weight parameter. This allows us to apply a 'boost' to the 'Local' category to computationally balance the 1:2.4 data imbalance, giving a more representative view of local activity.

  2. The Data Density (Opacity): The opacity of each pixel is now scaled based on the total number of original data points it contains. We use a technique called histogram equalization (eq_hist) to ensure that the full range of transparency is used effectively. The result is that dense urban areas appear crisp and opaque, while sparse rural points are visible but faint, providing a clear visual guide to data validity and activity levels.

  3. A Solid Background: To ensure a correct and consistent output, the function uses a robust "White Canvas" method. It builds the final image on an opaque canvas of the desired background color, avoiding some complex blending errors that can occur when compositing transparent layers (a toy sketch of this compositing step follows this list).
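
The following toy sketch (hand-made counts, not real data) illustrates the "White Canvas" compositing and density-based alpha described above: start from an opaque background, paint only the pixels that contain data with their majority color, then scale their opacity by a rank-based (histogram-equalized) transform of the point counts.

import numpy as np

counts = np.array([[0, 3], [120, 9]])          # total points per pixel (2x2 toy)
winner = np.array([[0, 1], [0, 2]])            # majority category index per pixel
palette = np.array([[0, 0, 255],               # Local        -> blue
                    [255, 0, 0],               # Tourist      -> red
                    [100, 149, 237]])          # Unclassified -> cornflowerblue

h, w = counts.shape
rgba = np.full((h, w, 4), [255, 255, 255, 255], dtype=np.uint8)  # opaque white canvas

data_mask = counts > 0
rgba[data_mask, :3] = palette[winner[data_mask]]               # paint data pixels only

# Rank-based alpha: denser pixels become more opaque (a simple stand-in for eq_hist).
ranks = counts[data_mask].argsort().argsort()
rgba[data_mask, 3] = (255 * (ranks + 1) / data_mask.sum()).astype(np.uint8)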

We also want to show data surrounding Germany, as supplementary information, without classification. This is done below, before defining the core create_bordered_map_alpha() method.

In [102]:
print("\nLoading background context data (outside Germany)...")
df_outside = dd.read_parquet(PARQUET_OUTSIDE_DE).persist()
print(f"-> Loaded {len(df_outside):,} background points.")
Loading background context data (outside Germany)...
-> Loaded 70,827,927 background points.

This data contains only the coordinates, no classification.

In [103]:
df_outside.head(5)
Out[103]:
x y
0 672875.1599944066 6598838.2244882751
1 672875.1599944066 6598838.2244882751
2 672875.1599944066 6598838.2244882751
3 672875.1599944066 6598838.2244882751
4 672875.1599944066 6598838.2244882751
In [104]:
def create_bordered_map_alpha(
    df, df_bg, bounds: Tuple[Tuple[float, float], Tuple[float, float]], border_gdf,
    color_key, background='white', plot_width=2400, local_weight=1.0,
    return_raster: Optional[bool] = None
):
    """
    Creates a high-resolution static map of categorical point data.

    Renders a "winner-takes-all" color for each pixel with opacity scaled
    by point density, composited on a solid background color. Allows for
    re-weighting of the 'Local' category to adjust for class imbalance.
    Uses df_bg to add grayscale background data.
    """
    x_range, y_range = bounds
    # 1. Prepare a high-resolution canvas with the correct aspect ratio.
    lng_width = x_range[1] - x_range[0]
    lat_height = y_range[1] - y_range[0]
    plot_height = int(((plot_width * lat_height) / lng_width) * 1.5)
    x_coords, y_coords = lnglat_to_meters(x_range, y_range)
    cvs = ds.Canvas(
        plot_width=plot_width, plot_height=plot_height,
        x_range=x_coords, y_range=y_coords
    )

    # 2.1 Aggregate points by category to get counts for each pixel.
    with diag.ProgressBar():
        agg = cvs.points(df, x='x', y='y', agg=ds.by('classification'))

    # 2.2 Create BACKGROUND Layer
    print("Aggregating and shading background layer with density...")
    with diag.ProgressBar():
        agg_bg = cvs.points(df_bg, x='x', y='y')

    # Shade the background using a gradient and log scaling.
    img_bg = tf.shade(
        agg_bg,
        cmap='#000000',     # A single color for all points
        how='log',          # Use log scale to see both sparse and dense areas
        # span=[0, 250],    # Map the data range from 0 to [max_count] to the full alpha gradient.
        alpha=240,          # MAX opacity for the densest areas
        min_alpha=30        # MIN opacity for the sparsest areas
    )

    # 3. Determine the "winning" category, applying weights if necessary.
    cats = list(agg.coords['classification'].values)
    weighted_agg = agg.astype(np.float64) # Use a float copy for weighting

    if local_weight != 1.0 and 'Local' in cats:
        print(f"Applying a weight of {local_weight} to the 'Local' category...")
        local_idx = cats.index('Local')
        weighted_agg.data[:, :, local_idx] *= local_weight

    majority_indices = weighted_agg.argmax(dim='classification')

    # 4. Create a density-based alpha channel from original (unweighted) counts.
    original_total_counts = agg.sum(dim='classification')
    img_alpha = tf.shade(original_total_counts, cmap="white", how='eq_hist')
    alpha_channel = np.nan_to_num(img_alpha.data, nan=0).astype(np.uint8)

    # 5. Build the final image using the "White Canvas" method.
    height, width = alpha_channel.shape
    data_mask = original_total_counts.data > 0

    # Start with an opaque canvas of the specified background color.
    bg_color_tuple = rgb(background)
    final_rgba = np.full((height, width, 4), list(bg_color_tuple) + [255], dtype=np.uint8)

    # Paint data pixels with their majority category color.
    for i, cat in enumerate(cats):
        if cat in color_key:
            mask = (majority_indices.data == i) & data_mask
            final_rgba[mask, :3] = rgb(color_key[cat])

    # Apply the density-based alpha only to the data pixels.
    final_rgba[data_mask, 3] = alpha_channel[data_mask]

    final_image = tf.Image(final_rgba)

    # 6. Render the final, opaque image with Matplotlib.
    fig, ax = plt.subplots(1, 1, figsize=(15, 15), facecolor=background)

    ax.imshow(
        final_image.to_pil(),
        extent=[x_coords[0], x_coords[1], y_coords[0], y_coords[1]])
    ax.imshow(
        img_bg.to_pil(),
        extent=[x_coords[0], x_coords[1], y_coords[0], y_coords[1]])

    border_gdf.to_crs(epsg=3857).plot(
        ax=ax, facecolor='none', edgecolor='black', linewidth=0.5, alpha=0.7
    )
    ax.set_xlim(x_coords)
    ax.set_ylim(y_coords)
    ax.set_axis_off()
    plt.tight_layout()

    if return_raster:
        return fig, final_image, img_bg
    return fig
In [105]:
opts = {
    "df": df,
    "df_bg": df_outside,
    "bounds": DE_BOUNDS,
    "border_gdf": nuts1_de,
    "color_key": color_key,
    "background": 'white',
    "local_weight": 1.0,
    "return_raster": True,
}

fig, img_fg, img_bg = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 513.80 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 205.32 ms
[Figure: final categorical map of Germany with density-scaled opacity and greyscale background context]

Save this image as the final map.

In [106]:
def save_image(fig, output_name, img_fg=None, img_bg=None, bounds=None):
    """
    Saves a Matplotlib figure (PNG) and optionally its constituent
    datashader layers (foreground and background) as GeoTIFFs.
    """
    # 1. Save the visual PNG from the matplotlib figure
    png_filepath = OUTPUT / f"{output_name}.png"
    relative_png_path = png_filepath.relative_to(Path.cwd().parents[1])
    print(f"Saving composite PNG to ./{relative_png_path}")
    fig.savefig(png_filepath, dpi=300, bbox_inches='tight', pad_inches=0)
    plt.close(fig) # Close the figure to free up memory

    # 2. Save the foreground GeoTIFF, if provided
    if img_fg is not None:
        tif_filepath_fg = OUTPUT / f'{output_name}_foreground.tif'
        print(
            f"Saving GeoTiff to ./"
            f"{tif_filepath_fg.relative_to(Path.cwd().parents[1])}")
        raster.save_datashader_to_geotiff(
            img=img_fg,
            filepath=tif_filepath_fg,
            bounds_ll=bounds
        )

    # 3. Save the background GeoTIFF, if provided
    if img_bg is not None:
        tif_filepath_bg = OUTPUT / f'{output_name}_background.tif'
        print(
            f"Saving GeoTiff to ./"
            f"{tif_filepath_bg.relative_to(Path.cwd().parents[1])}")
        raster.save_datashader_to_geotiff(
            img=img_bg,
            filepath=tif_filepath_bg,
            bounds_ll=bounds
        )
    
    print("Image saving complete.")
In [107]:
save_image(
    fig=fig,
    output_name='DE_locals_vs_tourists',
    img_fg=img_fg,
    img_bg=img_bg,
    bounds=DE_BOUNDS
)
Saving composite PNG to ./digital_traces_map/out/DE_locals_vs_tourists.png
Saving GeoTiff to ./digital_traces_map/out/DE_locals_vs_tourists_foreground.tif
Saving GeoTiff to ./digital_traces_map/out/DE_locals_vs_tourists_background.tif
Image saving complete.

Zoom into different regions

To examine selected areas in more detail, we can easily create zoomed-in views of key regions across Germany.

In [21]:
opts["bounds"] = DRESDEN_BOUNDS
opts["return_raster"] = False

fig = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 323.74 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 102.54 ms
[Figure: zoomed view of the Dresden region]
In [22]:
opts["bounds"] = BERLIN_BOUNDS

fig = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 307.38 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 102.26 ms
[Figure: zoomed view of the Berlin region]
In [23]:
opts["bounds"] = LEIPZIG_BOUNDS

fig = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 305.77 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 102.15 ms
[Figure: zoomed view of the Leipzig region]
In [24]:
opts["bounds"] = MUNICH_BOUNDS

fig = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 305.29 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 101.98 ms
[Figure: zoomed view of the Munich region]
In [25]:
opts["bounds"] = FRANKFURT_BOUNDS

fig = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 314.90 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 102.39 ms
[Figure: zoomed view of the Frankfurt region]
In [26]:
opts["bounds"] = BALTIC_COAST_BOUNDS

fig = create_bordered_map_alpha(**opts)
[########################################] | 100% Completed | 305.83 ms
Aggregating and shading background layer with density...
[########################################] | 100% Completed | 103.72 ms
[Figure: zoomed view of the Baltic coast region]

Create release file

In [108]:
!rm ../out/*.zip
rm: cannot remove '../out/*.zip': No such file or directory

This Bash command cleans up any previous ZIP files in the out/ directory. The ! indicates it's a shell command, not Python. The "cannot remove" message above simply means that no previous ZIP file existed.

Prepare a release

Make sure that 7z is available. Carto-Lab Docker comes with 7z. If you are using this in a rootful container, you can install it with !apt install p7zip-full. Otherwise (e.g. in the Jupyter4NFDI Hub), we must retrieve the binary below.

In [109]:
%%bash
# Check if '7z' is already available globally or in ~/bin
if ! command -v 7z >/dev/null 2>&1 && [ ! -x "$HOME/bin/7z" ]; then
  echo "7z not found. Installing local copy..."
  mkdir -p ~/bin && cd ~/bin
  wget -q https://www.7-zip.org/a/7z2301-linux-x64.tar.xz
  tar -xf 7z2301-linux-x64.tar.xz
  ln -sf ~/bin/7zz ~/bin/7z
else
  echo "7z is already available."
fi
7z is already available.

Create a new release *.zip file

We want to create a ZIP file with the current release version in the name. We can get the version tag with the following command:

In [114]:
!git describe --tags --abbrev=0
v0.10.5

Create the release file:

In [115]:
%%bash
export PATH="$HOME/bin:$PATH" \
    && cd .. && git config --local --add safe.directory '*' \
    && RELEASE_VERSION=$(git describe --tags --abbrev=0) \
    && 7z a -tzip -mx=9 out/release_$RELEASE_VERSION.zip \
    py/* out/* md/* resources/* *.bib notebooks/*.ipynb \
    *.md *.yml *.ipynb nbconvert.tpl conf.json pyproject.toml \
    -x!out/user_classification.csv -x!out/user_home_locations.csv \
    -x!py/__pycache__ -x!py/modules/__pycache__ -x!py/modules/.ipynb_checkpoints \
    -y > /dev/null
  • export PATH="$HOME/bin:$PATH" ensures that any binary (e.g. 7z) retrieved earlier is accessible (see the cell above).
  • git config --local --add safe.directory '*' ensures Git doesn't refuse to operate on repositories owned by a different user (the "dubious ownership" safeguard).
  • RELEASE_VERSION is the Bash variable that holds the version tag returned by git describe.
  • 7z a -tzip -mx=9 out/release_$RELEASE_VERSION.zip Uses the 7z archiving tool (7z a) to create a ZIP archive (-tzip) named out/release_ followed by the retrieved version number ($RELEASE_VERSION). -mx=9 sets the compression level to maximum.
  • With py/* out/* resources/* notebooks/*.ipynb (etc.) we explicitly select the folders and files that we want to include in the release. Note that the 00_data/ directory, which is not committed to the git repository itself (due to the .gitignore file), is not part of this selection.
  • -x!out/user_classification.csv -x!out/user_home_locations.csv excludes some of the intermediate data files that are not strictly necessary to replicate the final map, for increased privacy preservation
  • At the end, we exclude a number of temporary files that we do not need to archive (-x!py/__pycache__ -x!py/modules/__pycache__ etc.) and turn off any output logging by piping to /dev/null.

Next, we check the generated file:

In [116]:
!RELEASE_VERSION=$(git describe --tags --abbrev=0) \
    && ls -alh ../out/release_$RELEASE_VERSION.zip
-rw-r--r-- 1 root root 825M Nov  4 08:41 ../out/release_v0.10.5.zip

For the ioerDATA upload, we also generate a list of the files below.

In [118]:
ignore_files_folders = []
ignore_match = []
tools.tree(
    Path.cwd().parents[0],
    ignore_files_folders=ignore_files_folders, ignore_match=ignore_match)
Out[118]:
Directory file tree
.
├── out
│ ├── user_classification.csv
│ ├── DE_locals_vs_tourists_foreground.tif
│ ├── release_v0.10.5.zip
│ ├── user_home_locations.csv
│ ├── de_classified_points.parquet
│ │ ├── part.7.parquet
│ │ ├── part.12.parquet
│ │ ├── part.0.parquet
│ │ ├── part.13.parquet
│ │ ├── part.9.parquet
│ │ ├── part.5.parquet
│ │ ├── part.6.parquet
│ │ ├── part.2.parquet
│ │ ├── part.8.parquet
│ │ ├── part.10.parquet
│ │ ├── part.1.parquet
│ │ ├── part.3.parquet
│ │ ├── part.11.parquet
│ │ └── part.4.parquet
│ ├── de_outside_background.parquet
│ │ ├── part.7.parquet
│ │ ├── part.12.parquet
│ │ ├── part.0.parquet
│ │ ├── part.13.parquet
│ │ ├── part.9.parquet
│ │ ├── part.5.parquet
│ │ ├── part.6.parquet
│ │ ├── part.2.parquet
│ │ ├── part.8.parquet
│ │ ├── part.10.parquet
│ │ ├── part.1.parquet
│ │ ├── part.3.parquet
│ │ ├── part.11.parquet
│ │ ├── part.14.parquet
│ │ └── part.4.parquet
│ ├── svg
│ │ └── barplot_locals_tourists_de.svg
│ ├── NUTS_RG_01M_2024_4326.gpkg
│ ├── figures
│ │ └── barplot_locals_tourists_de.png
│ ├── DE_locals_vs_tourists.png
│ ├── shapes
│ │ ├── ne_110m_admin_0_countries.shx
│ │ ├── ne_110m_admin_0_countries.VERSION.txt
│ │ ├── ne_110m_admin_0_countries.prj
│ │ ├── ne_110m_admin_0_countries.dbf
│ │ ├── ne_110m_admin_0_countries.README.html
│ │ ├── ne_110m_admin_0_countries.cpg
│ │ └── ne_110m_admin_0_countries.shp
│ └── DE_locals_vs_tourists_background.tif
├── py
│ ├── _03_visualization.py
│ ├── _02_parquet.py
│ ├── _00_user_origin_conversion.py
│ ├── modules
│ │ └── base
│ │ ├── tools.py
│ │ ├── raster.py
│ │ ├── README.md
│ │ ├── .gitignore
│ │ ├── grid.py
│ │ ├── hll.py
│ │ ├── pkginstall.sh
│ │ └── preparations.py
│ └── _01_clustering.py
├── nbconvert.tpl
├── README.md
├── .pandoc
│ ├── readme.css
│ ├── favicon-32x32.png
│ ├── favicon-16x16.png
│ └── readme.html
├── notebooks
│ ├── 00_user_origin_conversion.ipynb
│ ├── 02_parquet.ipynb
│ ├── 03_visualization.ipynb
│ ├── .gitkeep
│ └── 01_clustering.ipynb
├── pyproject.toml
├── .gitignore
├── resources
│ ├── html
│ │ ├── 03_visualization.html
│ │ ├── 01_clustering.html
│ │ ├── 02_parquet.html
│ │ └── 00_user_origin_conversion.html
│ └── digital_traces_map.png
├── jupytext.toml
├── .gitlab-ci.yml
├── CHANGELOG.md
├── LICENSE.md
├── .gitmodules
├── 00_data
│ ├── 2025-09-19_userdays_DE_HLL.csv
│ ├── 2025-10-18_DE_outside_buffer_coords.csv
│ └── 2025-09-30_DE_coords_user.csv
├── .templates
│ └── CHANGELOG.md.j2
├── md
│ ├── 00_user_origin_conversion.md
│ ├── 01_clustering.md
│ ├── 02_parquet.md
│ └── 03_visualization.md
├── conf.json
├── tmp
└── .version
17 directories, 90 files

Create notebook HTML

In [ ]:
!jupyter nbconvert --to html_toc \
    --output-dir=../resources/html/ ./03_visualization.ipynb \
    --template=../nbconvert.tpl \
    --ExtractOutputPreprocessor.enabled=False >&- 2>&- # create single output file

IOER FDZ Jupyter Base Template v0.13.0