import requests
from bs4 import BeautifulSoup

url = "https://www.amazon.com/s?k=laptop"
# Amazon tends to block requests that lack a browser-like User-Agent header.
headers = {"User-Agent": "Mozilla/5.0"}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, "html.parser")
products = soup.find_all("div", class_="s-result-item")
for product in products:
    title = product.find("a", class_="a-link-normal")
    price = product.find("span", class_="a-price")
    # Some result items (ads, separators) lack a title or price element.
    if title is None or price is None:
        continue
    print(f"Title: {title.text.strip()}")
    print(f"Price: {price.text.strip()}")
This program first imports the requests and BeautifulSoup libraries. The requests library is used to make HTTP requests to the website, and the BeautifulSoup library is used to parse the HTML response.
The next step is to fetch the page. This is done by calling requests.get() with the URL, which returns a Response object containing the page's HTML.
The response is then parsed into a BeautifulSoup object. The find_all() method is used to find all the elements with the class name s-result-item. These elements represent the individual products on the page.
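The find_all() pattern can be tried offline on a small static snippet, without hitting the live site. This sketch reuses the class names from the example above on hand-written HTML, so the structure is an assumption standing in for Amazon's real markup:

```python
from bs4 import BeautifulSoup

# A small static snippet mimicking the structure the scraper expects.
html = """
<div class="s-result-item">
  <a class="a-link-normal">Example Laptop 15</a>
  <span class="a-price">$499.99</span>
</div>
<div class="s-result-item">
  <a class="a-link-normal">Another Laptop 14</a>
  <span class="a-price">$799.00</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
products = soup.find_all("div", class_="s-result-item")
for product in products:
    title = product.find("a", class_="a-link-normal").text
    price = product.find("span", class_="a-price").text
    print(f"Title: {title} | Price: {price}")
```

Because the input is fixed, this always finds exactly two products, which makes it a handy sanity check before pointing the scraper at a real page.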
For each product, the title and price are extracted and printed to the console.
This is just a simple example of web scraping in Python. There are many other things that can be done with web scraping, such as extracting more complex data, scraping multiple pages, and automating the scraping process.
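As one example of those extensions, scraping multiple result pages usually comes down to generating the URL for each page and repeating the fetch-and-parse step. A minimal sketch, assuming the site paginates with a "page" query parameter (the helper name here is hypothetical):

```python
# Hypothetical helper: build search-result URLs for successive pages,
# assuming the site paginates with a "page" query parameter.
def page_urls(query: str, pages: int) -> list[str]:
    return [f"https://www.amazon.com/s?k={query}&page={n}" for n in range(1, pages + 1)]

# Each URL could then be fetched and parsed exactly like the single page above.
for url in page_urls("laptop", 3):
    print(url)
```

In practice you would also add a delay between requests and respect the site's robots.txt and terms of service.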