rakash/python-basics-exercises · Commit 73de52b

Merge branch 'master' into reorder-ch8-challenges

2 parents: e620f4f + 3732d95

3 files changed · +4 additions, −4 deletions

ch08-conditional-logic/9-challenge.py

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@
 total_A_wins = 0
 total_B_wins = 0

-trials = 100_000
+trials = 10_000
 for trial in range(0, trials):
     A_win = 0
     B_win = 0
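
For context, trials sets how many simulated elections the challenge runs, so lowering it from 100_000 to 10_000 trades some precision in the estimated probability for a faster run. Below is a minimal sketch of the kind of Monte Carlo loop this variable drives; the region probabilities and the win rule are illustrative assumptions, not the book's actual solution.

import random

# Illustrative sketch only: the probabilities and win rule below are assumptions,
# not the real contents of 9-challenge.py.
total_A_wins = 0
total_B_wins = 0

trials = 10_000  # fewer trials run faster; more trials give a tighter estimate
for trial in range(0, trials):
    A_win = 0
    B_win = 0
    # Assume three regions where candidate A wins with these probabilities.
    for p in (0.87, 0.65, 0.17):
        if random.random() < p:
            A_win += 1
        else:
            B_win += 1
    if A_win > B_win:
        total_A_wins += 1
    else:
        total_B_wins += 1

print(f"Estimated probability that A wins: {total_A_wins / trials}")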

ch13-interact-with-pdf-files/1-work-with-the-contents-of-a-pdf-file.py

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@

 # Exercise 1
 path = "C:/python-basics-exercises/ch13-interact-with-pdf-files/\
-practice_files"
+practice_files"

 input_file_path = os.path.join(path, "The Whistling Gypsy.pdf")
 input_file = PdfFileReader(input_file_path)
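
The removed and added lines show the same text, so the change is presumably whitespace-only on the continued line. That matters because a backslash at the end of a line inside a string literal splices the next physical line into the same string, including any leading whitespace on it. A small sketch of the effect, not taken from the file itself:

# Leading whitespace on a continued line becomes part of the string.
indented = "C:/python-basics-exercises/ch13-interact-with-pdf-files/\
    practice_files"
flush = "C:/python-basics-exercises/ch13-interact-with-pdf-files/\
practice_files"

print(indented)  # ...ch13-interact-with-pdf-files/    practice_files  (broken path)
print(flush)     # ...ch13-interact-with-pdf-files/practice_files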

ch15-interacting-with-the-web/2-use-an-html-parser-to-scrape-websites.py

Lines changed: 2 additions & 2 deletions

@@ -13,7 +13,7 @@
 address = base_URL + "/profiles"
 html_page = urlopen(address)
 html_text = html_page.read().decode("utf-8")
-soup = BeautifulSoup(html_text, features="html.parser")
+soup = BeautifulSoup(html_text, "html.parser")

 # Exercise 2
 # Parse out all the values of the page links
@@ -26,5 +26,5 @@
 # Display the text in the HTML page of each link
 link_page = urlopen(link_address)
 link_text = link_page.read().decode("utf-8")
-link_soup = BeautifulSoup(link_text, features="html.parser")
+link_soup = BeautifulSoup(link_text, "html.parser")
 print(link_soup.get_text())
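
In the BeautifulSoup constructor the parser name is the second positional parameter (features), so BeautifulSoup(html_text, "html.parser") is equivalent to the keyword form it replaces, and naming a parser either way avoids bs4's "no parser was explicitly specified" warning. A minimal sketch of the pattern, assuming bs4 is installed; base_URL is an assumption here, since the hunks only show base_URL + "/profiles":

from urllib.request import urlopen

from bs4 import BeautifulSoup

# base_URL is assumed for illustration; the diff does not show its value.
base_URL = "http://olympus.realpython.org"
html_text = urlopen(base_URL + "/profiles").read().decode("utf-8")

# Positional parser name: same as passing features="html.parser".
soup = BeautifulSoup(html_text, "html.parser")

# Collect the href of every link on the page (illustrative step).
links = [anchor["href"] for anchor in soup.find_all("a")]
print(links)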

0 commit comments
