In this assignment, you'll scrape text from The California Aggie and then analyze the text.
The Aggie is organized by category into article lists. For example, there's a Campus News list, Arts & Culture list, and Sports list. Notice that each list has multiple pages, with a maximum of 15 articles per page.
The goal of exercises 1.1 - 1.3 is to scrape articles from the Aggie for analysis in exercise 1.4.
Exercise 1.1. Write a function that extracts all of the links to articles in an Aggie article list. The function should:
Have a parameter url for the URL of the article list.
Have a parameter page for the number of pages to fetch links from. The default should be 1.
Return a list of article URLs (each URL should be a string).
Test your function on 2-3 different categories to make sure it works.
Hints:
Be polite to The Aggie and save time by setting up requests_cache before you write your function.
Start by getting your function to work for just 1 page. Once that works, have your function call itself to get additional pages (a recursive sketch appears after the solution below).
You can use lxml.html or BeautifulSoup to scrape HTML. Choose one and use it throughout the entire assignment.
import re
import requests
import requests_cache
from bs4 import BeautifulSoup
requests_cache.install_cache('davis_aggie_cache')
def url_scraper(url, numPages=1):
    '''
    Takes the URL of a California Aggie article list and returns a list of
    article URLs as strings.
    Input:
        url: the URL of a California Aggie article list; it must end with a slash
        numPages: the number of pages to fetch links from
    Output:
        a sorted list of unique article URLs (strings)
    '''
    # Initialize the final list outside of the loop.
    url_list = []
    # Loop over pages 1 through numPages.
    for i in range(1, numPages + 1):
        # Build the URL for page i.
        url2 = url + 'page/%s' % i
        # Request the page and parse it with BeautifulSoup.
        # https://www.crummy.com/software/BeautifulSoup/bs4/doc/
        aggie = BeautifulSoup(requests.get(url2).content, "lxml")
        # Loop over every <a> tag in the page's only <section> element.
        for link in aggie.section.find_all('a'):
            possible = link.get('href')
            # Article URLs start with the year (e.g. theaggie.org/2017/...);
            # every other link is for navigation.
            if possible and 'https://theaggie.org/2' in possible:
                url_list.append(possible)
    # Deduplicate and sort the list before returning it.
    return sorted(set(url_list))
urllist = url_scraper('https://theaggie.org/city/', 3)
urllist
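The hint about having the function call itself points at a recursive variant instead of the loop used above. A minimal sketch of that approach, under the same assumptions as the loop-based solution (same page-URL pattern, same year-prefix filter); url_scraper_recursive is a hypothetical name:
def url_scraper_recursive(url, numPages=1):
    '''Sketch: recursive version of url_scraper.'''
    # Scrape the links on page numPages only.
    aggie = BeautifulSoup(requests.get(url + 'page/%s' % numPages).content, 'lxml')
    links = {a.get('href') for a in aggie.section.find_all('a')
             if a.get('href') and 'https://theaggie.org/2' in a.get('href')}
    # Base case: page 1 stands alone; otherwise recurse for the earlier pages.
    if numPages > 1:
        links |= set(url_scraper_recursive(url, numPages - 1))
    return sorted(links)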
Exercise 1.2. Write a function that extracts the title, text, and author of an Aggie article. The function should:
Have a parameter url for the URL of the article.
For the author, extract the "Written By" line that appears at the end of most articles. You don't have to extract the author's name from this line.
Return a dictionary with keys "url", "title", "text", and "author". The values for these should be the article url, title, text, and author, respectively.
For example, for this article your function should return something similar to this:
{
'author': u'Written By: Bianca Antunez \xa0\u2014\xa0city@theaggie.org',
'text': u'Davis residents create financial model to make city\'s financial state more transparent To increase transparency between the city\'s financial situation and the community, three residents created a model called Project Toto which aims to improve how the city communicates its finances in an easily accessible design. Jeff Miller and Matt Williams, who are members of Davis\' Finance and Budget Commission, joined together with Davis entrepreneur Bob Fung to create the model plan to bring the project to the Finance and Budget Commission in February, according to Kelly Stachowicz, assistant city manager. "City staff appreciate the efforts that have gone into this, and the interest in trying to look at the city\'s potential financial position over the long term," Stachowicz said in an email interview. "We all have a shared goal to plan for a sound fiscal future with few surprises. We believe the Project Toto effort will mesh well with our other efforts as we build the budget for the next fiscal year and beyond." Project Toto complements the city\'s effort to amplify the transparency of city decisions to community members. The aim is to increase the understanding about the city\'s financial situation and make the information more accessible and easier to understand. The project is mostly a tool for public education, but can also make predictions about potential decisions regarding the city\'s financial future. Once completed, the program will allow residents to manipulate variables to see their eventual consequences, such as tax increases or extensions and proposed developments "This really isn\'t a budget, it is a forecast to see the intervention of these decisions," Williams said in an interview with The Davis Enterprise. "What happens if we extend the sales tax? What does it do given the other numbers that are in?" Project Toto enables users, whether it be a curious Davis resident, a concerned community member or a city leader, with the ability to project city finances with differing variables. The online program consists of the 400-page city budget for the 2016-2017 fiscal year, the previous budget, staff reports and consultant analyses. All of the documents are cited and accessible to the public within Project Toto. "It\'s a model that very easily lends itself to visual representation," Mayor Robb Davis said. "You can see the impacts of decisions the council makes on the fiscal health of the city." Complementary to this program, there is also a more advanced version of the model with more in-depth analyses of the city\'s finances. However, for an easy-to-understand, simplistic overview, Project Toto should be enough to help residents comprehend Davis finances. There is still more to do on the project, but its creators are hard at work trying to finalize it before the 2017-2018 fiscal year budget. "It\'s something I have been very much supportive of," Davis said. "Transparency is not just something that I have been supportive of but something we have stated as a city council objective [ ] this fits very well with our attempt to inform the public of our challenges with our fiscal situation." ',
'title': 'Project Toto aims to address questions regarding city finances',
'url': 'https://theaggie.org/2017/02/14/project-toto-aims-to-address-questions-regarding-city-finances/'
}
Hints:
The author line is always the last line of the last paragraph (see the sketch after these hints).
Python 2 displays some Unicode characters as \uXXXX. For instance, \u201c is a left double quotation mark.
You can convert most of these to ASCII characters with the method call (on a string) .translate({ 0x2018:0x27, 0x2019:0x27, 0x201C:0x22, 0x201D:0x22, 0x2026:0x20 })
If you're curious about these characters, you can look them up on this page, or read more about what Unicode is.
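Before the full solution, the last-paragraph hint can be tried in isolation. A minimal sketch, assuming the article body sits in an element with itemprop="articleBody" (the solution below relies on the same markup); author_line is a hypothetical helper:
def author_line(url):
    '''Sketch: pull the "Written By" line from an article's last paragraph.'''
    soup = BeautifulSoup(requests.get(url).content, 'lxml')
    body = soup.find(itemprop='articleBody')
    # Per the hint, the author line is the last line of the last paragraph.
    last = body.find_all('p')[-1].get_text().strip().split('\n')[-1]
    return last if 'Written' in last else ''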
def extract_aggie(url):
    '''
    Takes the URL of a California Aggie article and returns a dictionary with
    keys "url", "title", "text", and "author" mapped to that article's values.
    Input:
        url: the URL of a California Aggie article
    Output:
        a dictionary with keys "url", "title", "text", and "author"
    '''
    # Request the article and parse it with BeautifulSoup.
    aggie = BeautifulSoup(requests.get(url).content, 'lxml')
    # The <title> tag reads 'Article Title | The Aggie'; keep only the part
    # before the pipe and translate curly punctuation to ASCII.
    title = aggie.title.string.split(' |', 1)[0].translate(
        {0x2018: 0x27, 0x2019: 0x27, 0x201C: 0x22, 0x201D: 0x22, 0x2026: 0x20})
    # Narrow the soup down to the article body.
    aggie = aggie.find(itemprop='articleBody')
    # http://stackoverflow.com/questions/40660273/in-beautifulsoup-ignore-children-elements-while-getting-parent-element-data
    # Remove <figure> tags so captions and image credits stay out of the text.
    for figure in aggie.find_all('figure'):
        figure.decompose()
    # The author line, when present, is the last line of the last paragraph.
    if 'Written' in aggie.find_all()[-1].parent.parent.text:
        author = aggie.find_all()[-1].parent.parent.text
        # Keep everything from 'Written' onward.
        trash, part, author = author.strip().partition('Written')
        author = part.strip() + author
    else:
        author = ''
    # Translate curly punctuation to ASCII and collapse newlines into spaces.
    aggie = aggie.get_text().translate(
        {0x2018: 0x27, 0x2019: 0x27, 0x201C: 0x22, 0x201D: 0x22, 0x2026: 0x20}
    ).strip('\n').replace('\n', ' ')
    # Drop the author line from the article text.
    aggie = aggie.replace(author, '')
    return {'author': author, 'text': aggie, 'title': title, 'url': url}
article = extract_aggie('https://theaggie.org/2017/02/14/project-toto-aims-to-address-questions-regarding-city-finances/')
article
Exercise 1.3. Use your functions from exercises 1.1 and 1.2 to get a data frame of 60 Campus News articles and a data frame of 60 City News articles. Add a column to each that indicates the category, then combine them into one big data frame.
The "text" column of this data frame will be your corpus for natural language processing in exercise 1.4.
import pandas as pd
import numpy as np
#get urls
campus = url_scraper('https://theaggie.org/campus/', 4)
city = url_scraper('https://theaggie.org/city/', 4)
#make dataframes
campus = pd.DataFrame(campus, columns = ['url'])
campus['category'] = 'campus'
city = pd.DataFrame(city, columns = ['url'])
city['category'] = 'city'
# Combine the two URL frames into one (pd.concat replaces the deprecated
# DataFrame.append).
davisNews = pd.concat([city, campus], ignore_index=True)
# List to collect each article's dictionary.
article_list = []
# Scrape every article in the combined URL frame.
for i in davisNews['url']:
    article_dict = extract_aggie(i)
    article_list.append(article_dict)
# Build the final data frame and merge in the category column (the two
# frames share only the 'url' column, so join on it explicitly).
article_df = pd.DataFrame(article_list)
article_df = article_df.merge(davisNews, on='url')
article_df
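A quick sanity check that each category contributed the expected 60 articles (4 pages of 15 each; the set() dedup in url_scraper can trim a few if list pages repeat links):
# Articles per category: should be about 60 each (4 pages x 15 articles).
print(article_df['category'].value_counts())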
Exercise 1.4. Use the Aggie corpus to answer the following questions. Use plots to support your analysis.
What topics does the Aggie cover the most? Do city articles typically cover different topics than campus articles?
What are the titles of the top 3 pairs of most similar articles? Examine each pair of articles. What words do they have in common?
Do you think this corpus is representative of the Aggie? Why or why not? What kinds of inference can this corpus support? Explain your reasoning.
Hints:
The nltk book and scikit-learn documentation may be helpful here.
You can determine whether city articles are "near" campus articles from the similarity matrix or with k-nearest neighbors (sketches of both appear below).
If you want, you can use the wordcloud package to plot a word cloud. To install the package, run conda install -c https://conda.anaconda.org/amueller wordcloud in a terminal. Word clouds look nice and are easy to read, but are less precise than bar plots.
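The similarity-matrix hint might be realized along these lines. A sketch, assuming TF-IDF vectors over the "text" column and cosine similarity (one reasonable choice, not the only one); it also surfaces the top 3 most similar pairs asked for above:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# TF-IDF vectors for each article body, dropping English stop words.
tfidf = TfidfVectorizer(stop_words='english')
vectors = tfidf.fit_transform(article_df['text'])

# Pairwise cosine similarities; keep only the upper triangle so each pair
# appears once and articles are not paired with themselves.
sim = np.triu(cosine_similarity(vectors), k=1)

# The 3 largest entries give the top 3 most similar pairs.
for k in np.argsort(sim, axis=None)[::-1][:3]:
    i, j = np.unravel_index(k, sim.shape)
    print(round(sim[i, j], 3),
          article_df['title'].iloc[i], '|', article_df['title'].iloc[j])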
from wordcloud import WordCloud
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
# Join all article titles into one string for the word cloud.
text = ' '.join(article_df['title'])
wordcloud = WordCloud().generate(text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
The California Aggie covers a variety of topics around Davis and Yolo County. From the word cloud above, its main topics over the past couple of months have included the Police Logs, which appear to run every week, student protests, and the ASUCD Senate campaigns. Other recurring subjects are food, the controversy surrounding the former Chancellor, and the 2016 presidential election and its outcome.
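As the hint notes, bar plots are more precise than word clouds. A sketch of term counts over the titles, using the CountVectorizer and stopwords imports above (the top-15 cutoff is arbitrary; older scikit-learn versions spell get_feature_names_out as get_feature_names):
# Count terms across all titles, dropping English stop words
# (assumes the NLTK stopwords corpus is downloaded: nltk.download('stopwords')).
vec = CountVectorizer(stop_words=stopwords.words('english'))
counts = vec.fit_transform(article_df['title'])
totals = counts.sum(axis=0).A1  # total count of each term over the corpus

# Plot the 15 most frequent terms as a bar chart.
top = sorted(zip(vec.get_feature_names_out(), totals), key=lambda t: -t[1])[:15]
words, freqs = zip(*top)
plt.bar(range(len(words)), freqs)
plt.xticks(range(len(words)), words, rotation=60)
plt.tight_layout()
plt.show()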
city_df = article_df.loc[article_df['category'] == 'city']
city_text = [i for i in city_df['title']]
city_text = ' '.join(city_text)
wordcloud = WordCloud().generate(city_text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
campus_df = article_df.loc[article_df['category'] == 'campus']
campus_text = [i for i in campus_df['title']]
campus_text = ' '.join(campus_text)
wordcloud = WordCloud().generate(campus_text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
From the two word clouds above, we can see the difference in topics covered by the City and Campus sections of The California Aggie. The campus section focuses on happenings on campus, like the ASUCD Senate race, the new Chancellor, and student protests. The city section is heavy with police logs and city-wide issues like the vandalism at the Islamic Center, along with coverage of Yolo County and Sacramento. The overlap between the two sections occurs when an issue or event impacts both the city and the campus of Davis: 'protest' appears in both word clouds because protests affect both.
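The overlap claim can be checked quantitatively with the k-nearest-neighbors hint. A sketch reusing the vectors TF-IDF matrix from the similarity sketch above (5 neighbors and cosine distance are arbitrary choices):
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# If city and campus articles covered the same topics, k-NN on the TF-IDF
# vectors would score near the ~50% base rate; a higher score means the
# two sections are largely separable.
knn = KNeighborsClassifier(n_neighbors=5, metric='cosine')
print(cross_val_score(knn, vectors, article_df['category'], cv=5).mean())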
This corpus is not very representative of The California Aggie, because a city newspaper comments on the happenings of a city and the impact that worldwide events have on it. Since the events that draw the public's attention constantly change, a newspaper must publish new articles daily to cover them. Because of this, past articles, especially only those from the past few months, are not very representative of the articles the Aggie will write in the future. The kinds of inference this corpus can support concern recurring content: weekly features like the Police Logs, or annual events such as the ASUCD Senate elections.