Part One
One Card Becomes One Dictionary
The natural unit of scraped data is a dictionary. One article card becomes one dictionary with named fields like title, link, author, and published_at.
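As a minimal sketch, one card could look like this as a dictionary; the field values here are made-up examples, not real scraped data:

```python
# One article card represented as one dictionary.
# The keys are the named fields; the values are hypothetical.
article = {
    "title": "City council approves new park",
    "link": "/society/park-vote",
    "author": "A. Reporter",
    "published_at": "2024-05-01",
}

# Named fields make the data self-describing:
print(article["title"])
print(article["published_at"])
```

Because every field has a name, later code can ask for `article["title"]` instead of remembering positions in a tuple.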
Part Two
Many Cards Become a List of Dicts
The next step is a loop: find all article cards, create one dictionary per card, and append() each dictionary to a list.
This is the core data-collection loop of the whole book: find many containers, loop over them, build dictionaries, append them to a list.
The scraper is built from for loops, if guards, dictionaries, and append(); it works because these basic tools combine well.
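The loop can be sketched like this. Here `cards` stands in for the result of a find-all call on the page (for example `soup.find_all("article")`); it is plain data so the sketch is self-contained:

```python
# Stand-in for the list of card containers found on the page.
# The None title simulates an incomplete card.
cards = [
    {"title": "Story one",   "link": "/society/one"},
    {"title": None,          "link": "/society/two"},
    {"title": "Story three", "link": "/society/three"},
]

rows = []
for card in cards:                 # loop over many containers
    if card["title"] is None:      # if guard: skip incomplete cards
        continue
    rows.append({                  # build one dictionary per card...
        "title": card["title"],
        "link": card["link"],
    })                             # ...and append it to the list

print(len(rows))  # 2 complete rows survive the guard
```

The result, `rows`, is the list of dictionaries that the rest of the chapter builds on.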
Part Three
List of Dicts to DataFrame
Pandas reads a list of dictionaries directly. Each dictionary becomes one row. Each key becomes one column.
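The conversion is a single constructor call. A minimal sketch with two hypothetical rows:

```python
import pandas as pd

# Each dictionary becomes one row; each key becomes one column.
rows = [
    {"title": "Story one", "link": "/society/one"},
    {"title": "Story two", "link": "/society/two"},
]
df = pd.DataFrame(rows)

print(df.shape)          # (2, 2): two rows, two columns
print(list(df.columns))  # ['title', 'link']
```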
Part Four
Saving the First CSV
A DataFrame can be written to disk with to_csv(). This is the point where scraping becomes useful outside the notebook.
The CSV file will contain one row per article and two columns: title and link.
One catch: scraped links are often relative paths like /society/.... Add the base URL before saving so the CSV contains complete links that work everywhere.
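A sketch of both steps, using the standard library's `urljoin` to complete the links; the base URL here is a placeholder, not the real site:

```python
from urllib.parse import urljoin

import pandas as pd

# Hypothetical base URL; substitute the site you actually scraped.
BASE_URL = "https://example.com"

df = pd.DataFrame([
    {"title": "Story one", "link": "/society/one"},
    {"title": "Story two", "link": "/society/two"},
])

# Turn relative paths into absolute URLs before saving.
df["link"] = df["link"].apply(lambda path: urljoin(BASE_URL, path))

# index=False drops pandas' row-number column from the file.
df.to_csv("articles.csv", index=False)
```

After this, `articles.csv` holds one row per article, and every link opens correctly outside the notebook.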
Part Five
Your Turn — Scrape One Page
The snapshot below contains four article cards. Extract the titles and links, build a DataFrame, and preview the CSV text.
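A starter scaffold for the exercise. The chapter's snapshot is not reproduced here, so these four cards are hypothetical stand-ins with the same shape (a title plus a relative link); replace them with the values you extract:

```python
import pandas as pd

# Hypothetical stand-ins for the four cards in the snapshot.
cards = [
    {"title": "Card one",   "link": "/society/a"},
    {"title": "Card two",   "link": "/society/b"},
    {"title": "Card three", "link": "/society/c"},
    {"title": "Card four",  "link": "/society/d"},
]

df = pd.DataFrame(cards)

# Calling to_csv() without a filename returns the CSV as a string,
# which is a convenient way to preview it in the notebook.
print(df.to_csv(index=False))
```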