1. Python Tutorial on Using a Phone Camera
- Have you ever considered using your phone's camera for computer vision tasks? You can! In this tutorial, I'll demonstrate how to use your phone's camera with Python for computer vision.
To use a phone camera from Python, start by installing the OpenCV library with pip install opencv-python, then download and install the IP Webcam app on your phone.
After installing the IP Webcam app, make sure your phone and PC are connected to the same network, then run the app on your phone to launch the server.
The camera will start and show an IP address at the bottom of the screen; we need that IP address to open the phone's camera from our Python code.
import cv2
import numpy as np

# Use the IP address shown by the IP Webcam app (the address below is just an example)
url = "https://[2405:201:1017:e00e:da32:e3ff:fe6c:ccfb]:8080/video"
cp = cv2.VideoCapture(url)

while True:
    # Read one frame from the phone camera stream
    ret, frame = cp.read()
    if frame is not None:
        cv2.imshow("Frame", frame)
    # Press "q" to quit
    q = cv2.waitKey(1)
    if q == ord("q"):
        break

cp.release()
cv2.destroyAllWindows()
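- The same loop can also capture still photos from the stream. Below is a minimal sketch of that idea: the URL is a placeholder for whatever address your IP Webcam app shows, and the snapshot file name is just an example. Press the s key to save the current frame and q to quit.
import cv2

# Minimal sketch: save a snapshot from the stream when "s" is pressed
cap = cv2.VideoCapture("http://192.168.0.101:8080/video")  # placeholder URL, replace with yours
while True:
    ret, frame = cap.read()
    if not ret:
        continue
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1)
    if key == ord("s"):
        cv2.imwrite("snapshot.png", frame)  # example output file name
    elif key == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()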
In short, a phone camera can be a useful tool for computer vision applications when combined with Python. We can quickly and simply capture photos and videos from the phone’s camera and use them for various computer vision applications with the help of the OpenCV and numpy libraries.
2. How to Use Python to Build a Music Player GUI?
- Building a music player interface in Python is a fun and engaging project for anyone interested in both music and programming. Pygame and Tkinter are two of the better-known Python GUI options, though there are others as well.
- Pygame, a library primarily used for making video games, provides the audio playback component we need for a music player, while the Tkinter GUI library can be used to build user interfaces for Python programmes.
- To develop a music player GUI, we first select a GUI framework and then create functions such as play, stop, pause, and unpause in Python.
import os
import tkinter as tkr
from tkinter.filedialog import askdirectory
# pip install pygame
import pygame

# Create the main window
music_player = tkr.Tk()
music_player.title("My Music Player")
music_player.geometry("450x350")

# Ask for the folder that contains the songs and list its files
directory = askdirectory()
os.chdir(directory)
song_list = os.listdir()

# Fill the playlist widget with the file names
play_list = tkr.Listbox(music_player, font="Helvetica 12 bold", bg='yellow', selectmode=tkr.SINGLE)
for item in song_list:
    play_list.insert(tkr.END, item)

pygame.init()
pygame.mixer.init()

def play():
    # Load and play the currently selected song, and show its title
    pygame.mixer.music.load(play_list.get(tkr.ACTIVE))
    var.set(play_list.get(tkr.ACTIVE))
    pygame.mixer.music.play()

def stop():
    pygame.mixer.music.stop()

def pause():
    pygame.mixer.music.pause()

def unpause():
    pygame.mixer.music.unpause()

# Control buttons
Button1 = tkr.Button(music_player, width=5, height=3, font="Helvetica 12 bold", text="PLAY", command=play, bg="blue", fg="white")
Button2 = tkr.Button(music_player, width=5, height=3, font="Helvetica 12 bold", text="STOP", command=stop, bg="red", fg="white")
Button3 = tkr.Button(music_player, width=5, height=3, font="Helvetica 12 bold", text="PAUSE", command=pause, bg="purple", fg="white")
Button4 = tkr.Button(music_player, width=5, height=3, font="Helvetica 12 bold", text="UNPAUSE", command=unpause, bg="orange", fg="white")

# Label that shows the title of the song being played
var = tkr.StringVar()
song_title = tkr.Label(music_player, font="Helvetica 12 bold", textvariable=var)

# Lay everything out and start the event loop
song_title.pack()
Button1.pack(fill="x")
Button2.pack(fill="x")
Button3.pack(fill="x")
Button4.pack(fill="x")
play_list.pack(fill="both", expand="yes")
music_player.mainloop()
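- pygame's mixer typically handles formats such as MP3, WAV, and OGG, so one small refinement (a minimal sketch, not part of the script above) is to filter the directory listing before filling the playlist; the list of extensions is an assumption you can adjust:
# Minimal sketch: keep only common audio files in the playlist (extensions are an assumption)
audio_extensions = (".mp3", ".wav", ".ogg")
song_list = [f for f in os.listdir() if f.lower().endswith(audio_extensions)]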
- Whether you’re a novice or seasoned programmer, using Python to design a music player GUI can be a terrific way to develop new skills and produce something enjoyable and helpful. Why don’t you give it a shot and see what type of music player you can make with Python?
3. Creating Pencil Sketches with Python
- To turn a photo into a pencil sketch with Python, follow these easy steps:
- Import the OpenCV library.
- Read the photo that you wish to turn into a sketch.
- Convert the image to grayscale.
- Invert the grayscale image.
- Apply a Gaussian blur to the inverted image.
- Invert the blurred image.
- Divide the grayscale image by the inverted blurred image.
- Display and save the pencil sketch.
import cv2

# Read the photo and convert it to grayscale
img = cv2.imread("image.jpg")
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Invert the grayscale image, blur it, then invert the blurred result
inv_gray_image = 255 - gray_image
blur_inv_gray_image = cv2.GaussianBlur(inv_gray_image, (19, 19), 0)
inv_blur_image = 255 - blur_inv_gray_image

# Dividing the grayscale image by the inverted blurred image produces the sketch
sketch = cv2.divide(gray_image, inv_blur_image, scale=256.0)

cv2.imshow("Original image", img)
cv2.imshow("Pencil image", sketch)
cv2.imwrite("./Pencilart.png", sketch)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Python makes it simple and enjoyable to create pencil sketches, and it only takes a small amount of code. Try out different blur settings and photos to produce original results, as in the sketch below.
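- For example, a larger Gaussian kernel softens the sketch while a smaller one keeps more detail. A minimal sketch of that experiment, reusing gray_image and inv_gray_image from the code above (the kernel sizes and output file names are arbitrary; kernel sizes must be odd):
# Minimal sketch: compare different blur strengths (kernel sizes chosen arbitrarily)
for k in (7, 21, 51):
    blurred = cv2.GaussianBlur(inv_gray_image, (k, k), 0)
    variant = cv2.divide(gray_image, 255 - blurred, scale=256.0)
    cv2.imwrite(f"Pencilart_k{k}.png", variant)  # example file names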
4. Web Scraping with Python and BeautifulSoup: A Practical Guide
- Web scraping is a useful tool for obtaining data from websites, and Python's BeautifulSoup package makes it simple to build efficient web scrapers.
- In this blog post, we’ll look at how to build a web scraper that pulls articles from a news website using Python and BeautifulSoup.
- You must set up Python and the BeautifulSoup library before you can begin.
- Python can be downloaded and installed on your computer from the official website.
- Once Python is installed, you can use pip, Python's package manager, to install BeautifulSoup. Open a terminal and type the following command:
pip install beautifulsoup4
- Understanding the code: now that Python and BeautifulSoup are installed, let's examine the code that builds our web scraper in more detail.
- The Python code is composed of just one class called Scraper.
- The __init__ method initialises the class and sets the site variable to the website we wish to scrape.
- The scrape method opens the website, reads its contents, and parses the HTML using BeautifulSoup.
- It then prints the URLs of all the anchor tags that contain the word “articles” in their href attribute.
- You can customize the code to scrape different websites or extract different types of data. For example, you can modify it to extract article titles, dates, or authors instead of URLs (see the sketch after the code below).
- You can also use different search criteria to find the data you want to extract.
import urllib.request
from bs4 import BeautifulSoup

class Scraper:
    def __init__(self, site):
        self.site = site

    def scrape(self):
        # Open the site and read the raw HTML
        r = urllib.request.urlopen(self.site)
        html = r.read()
        # Parse the HTML and print every link that points to an article
        sp = BeautifulSoup(html, "html.parser")
        for tag in sp.find_all("a"):
            url = tag.get("href")
            if url is None:
                continue
            if "articles" in url:
                print("\n" + url)

news = "https://news.google.com/"
Scraper(news).scrape()
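- As an example of such a customization, the loop below prints each link's text instead of its URL, which on many news sites corresponds to the article title. This is only a minimal sketch that reuses the imports above and assumes the anchors carry their titles as text:
# Minimal sketch: print link text (often the article title) instead of the URL
def scrape_titles(site):
    html = urllib.request.urlopen(site).read()
    sp = BeautifulSoup(html, "html.parser")
    for tag in sp.find_all("a"):
        url = tag.get("href")
        title = tag.get_text(strip=True)
        if url and "articles" in url and title:
            print(title)

scrape_titles("https://news.google.com/")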
- Web scraping is a useful method for quickly and simply gathering data from webpages, and Python with BeautifulSoup makes it easy to build efficient scrapers that automate the data collection process.
- By tailoring the code to your needs, you can extract the data you require and use it for data analysis, research, or business intelligence. With this practical guide, you can begin exploring the potential of web scraping and take your data analysis further.
5. Scraping Twitter without API
- For this task, we use the twint library along with pandas and the collections module.
- The twint library lets us send a request to Twitter with a given username and obtain the list of users that account follows.
- The get_followings() function accepts a username as input and returns the list of users that account is following.
- It retrieves the followings list with the twint library and returns it as a list of usernames.
import twint
import pandas as pd
from collections import Counter
- The Counter class from the collections module is used to identify the accounts that appear most frequently across all of the input users' following lists.
- The follow relationships between the input users are recorded in the follow_relations dictionary, and a pandas DataFrame is created to display it.
users = [
'shakira',
'KimKardashian',
'rihanna',
'jtimberlake',
'KingJames',
'neymarjr',
]
- The get_followings(username) function uses the twint library to extract the list of accounts that the given username follows on Twitter. It begins by initialising the twint configuration object c and setting its Username attribute to the given username.
def get_followings(username):
    # Configure twint to fetch the accounts this user follows
    c = twint.Config()
    c.Username = username
    c.Pandas = True
    twint.run.Following(c)
    # twint stores the result in a pandas DataFrame
    list_of_followings = twint.storage.panda.Follow_df
    return list_of_followings['following'][username]
- After defining the list of users, the code creates an empty dictionary called followings and an empty list called following_list.
- The for loop iterates through each person in the users list and calls the get_followings function with that username.
- The function's return value, a list of accounts that the user follows, is stored in the followings dictionary with the user as the key, and the same accounts are appended to following_list.
- If get_followings raises a KeyError, which indicates that the account does not exist or is private, the code prints an error message and moves on to the next user.
followings = {}
following_list = []
for person in users:
    print('#####\nStarting: ' + person + '\n#####')
    try:
        followings[person] = get_followings(person)
        following_list = following_list + followings[person]
    except KeyError:
        # The account does not exist or is private
        print('KeyError: could not fetch followings for ' + person)
- After collecting all of the accounts that our users follow, the code uses the Counter class from the collections library to count how often each account appears in following_list.
- Calling the most_common(10) method on the Counter object returns the top 10 accounts followed by our users.
Counter(following_list).most_common(10)
- The next step is to build the follow_relations dictionary, where each key is a user and its value is a list of boolean values indicating whether that user follows each of the other users, and then create a pandas DataFrame to show this data in a more readable way.
- The follow_relations dictionary is passed as the first argument to pd.DataFrame.from_dict(), which generates the new DataFrame.
- The keys of the dictionary, which are the names of every user in our analysis, become the row index, and the same user names are supplied as the columns argument.
follow_relations = {}
for following_user in followings.keys():
    follow_relation_list = []
    for followed_user in followings.keys():
        # True if following_user follows followed_user, otherwise False
        if followed_user in followings[following_user]:
            follow_relation_list.append(True)
        else:
            follow_relation_list.append(False)
    follow_relations[following_user] = follow_relation_list
- In the resulting DataFrame following_df, users appear as both rows and columns, and the value at the intersection of a row and a column is True if the row user follows the column user and False otherwise.
following_df = pd.DataFrame.from_dict(follow_relations,
orient='index', columns=followings.keys())
following_df
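- Once the DataFrame is built, pandas makes simple follow statistics easy to compute. A minimal sketch, using only the following_df created above:
# Minimal sketch: count, for each user, how many of the listed users they follow...
follows_given = following_df.sum(axis=1)
# ...and how many of the listed users follow them
follows_received = following_df.sum(axis=0)
print(follows_given.sort_values(ascending=False))
print(follows_received.sort_values(ascending=False))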
- With the help of the Twint library and Python programming language, we can quickly get data on Twitter users’ followers and followings. We can learn more about the most popular accounts among a set of users and investigate the potential connections between them by evaluating this data.
- We can change this data into a more approachable format with the aid of pandas and other libraries, making it simpler to read and reach useful conclusions. Overall, this method has several applications, including social media analysis and market research.
6. Exploring Keyword Research with Python's Pytrends Library
- Businesses today use data to gain insights that help them make strategic decisions, and this is where data science comes in. Data science is an interdisciplinary field that uses statistical and computational techniques to draw conclusions from data.
- We will examine how to use Python to conduct keyword research using the Google Trends API in this blog post. In particular, the Google Trends API will be queried using the pytrends package, and the output will be visualised using the pandas and matplotlib libraries.
- We begin by importing the relevant libraries, including matplotlib.pyplot, pandas, and pytrends. Then, in order to communicate with the Google Trends API, we construct a pytrends object.
import pandas as pd
from pytrends.request import TrendReq
import matplotlib.pyplot as plt
trends = TrendReq()
- Create the payload: next, a payload for the keyword “Data Science” is constructed using the pytrends object. The payload contains information such as the keyword(s) we want to query, the time period, the location, and the category.
trends.build_payload(kw_list=["Data Science"])
data = trends.interest_by_region()
print(data.sample(10))
- Visualise the results: lastly, we use pandas and matplotlib to visualise the findings. We randomly sample 15 rows from the dataframe, reset the index, and draw a bar plot with “geoName” on the x-axis and “Data Science” on the y-axis.
df = data.sample(15)
df.reset_index().plot(x="geoName", y="Data Science", figsize=(120,16), kind="bar")
plt.show()
- Get trending searches in India: the top 10 trending searches in India are then printed using pytrends.
data = trends.trending_searches(pn="india")
print(data.head(10))
- Get keyword suggestions for “Programming”: the top five suggestions for the term “Programming” are then printed using pytrends.
keyword = trends.suggestions(keyword="Programming")
data = pd.DataFrame(keyword)
print(data.head())
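- pytrends can also report how interest in the payload keyword changes over time via interest_over_time(). A minimal sketch, reusing the trends object and rebuilding the “Data Science” payload from earlier (the figure size is arbitrary):
# Minimal sketch: plot search interest over time for the current payload
trends.build_payload(kw_list=["Data Science"])
over_time = trends.interest_over_time()
over_time.plot(y="Data Science", figsize=(12, 6))
plt.show()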
- In this article, we learnt how to conduct keyword research using Python and the Google Trends API. We have looked at how to query the Google Trends API using the pytrends library, and how to display the results using the pandas and matplotlib libraries. Also, we now know how to collect keyword ideas and trending search terms. With this information, you can begin examining market patterns and using the insights to guide your business decisions.
7. The Definitive Guide to Scraping Wikipedia with Python: Unleashing the Potential of Data Mining
- One of the biggest online encyclopaedias, Wikipedia has a tonne of information on a variety of subjects. Python offers a practical method for efficiently scraping Wikipedia for data mining needs. We’ll look at how to scrape Wikipedia with Python and the Wikipedia library in this blog post.
Using pip, we first install the wikipedia library. After installation, we can use the search function to find a list of Wikipedia articles that match our search term.
pip install wikipedia
import wikipedia as wiki
- The search function returns matching article titles, summary returns a short summary (set_lang switches the language), and the page function gives us detailed information about a particular page, such as its title, URL, content, images, and links.
# Search for articles, then print a summary in English and in French
print(wiki.search("Python"))
print(wiki.summary("Python"))
wiki.set_lang("fr")
print(wiki.summary("Python"))
wiki.set_lang("en")

# Load a full page object and inspect its attributes
p = wiki.page("Python")
print(p.title)
print(p.url)
print(p.content)
print(p.images)
print(p.links)
The page object gives us direct access to an article's attributes, so we can extract exactly the data we need for our data mining tasks.
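Ambiguous terms such as "Python" can raise a DisambiguationError, so it is worth guarding the page call. A minimal sketch of handling it (the fallback choice of the first suggested option is an arbitrary assumption):
from wikipedia.exceptions import DisambiguationError

# Minimal sketch: fall back to the first suggested option when a title is ambiguous
try:
    p = wiki.page("Python")
except DisambiguationError as e:
    print("Ambiguous title, options:", e.options[:5])
    p = wiki.page(e.options[0])  # arbitrary fallback: take the first suggestion
print(p.title)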
Python and the wikipedia library make a powerful combination for scraping Wikipedia for data mining. By tapping into the large amount of material on Wikipedia, we can improve our data analysis skills and gain significant insights.
8. Web Scraping with Python and Exporting the Data to a CSV File
- The process of gathering data from webpages is called web scraping. This blog post will go through how to use Python to extract data from a website and export it to a CSV file.
- The prerequisite libraries, Beautiful Soup and urllib, must first be imported; Beautiful Soup is installed with pip, while urllib ships with Python's standard library and needs no installation. The URL of the website from which we wish to extract data is then specified.
pip install bs4
- The urllib.request library is then used to open the URL and read its contents, and Beautiful Soup extracts the relevant data from the HTML. In this example, we are pulling data about Samsung mobile phones from Flipkart.
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as uReq

# Search results page for Samsung mobiles on Flipkart
my_url = "https://www.flipkart.com/search?q=samsung+mobiles&sid=tyy%2C4io&as=on&as-show=on&otracker=AS_QueryStore_HistoryAutoSuggest_0_2&otracker1=AS_QueryStore_HistoryAutoSuggest_0_2&as-pos=0&as-type=HISTORY&as-searchtext=sa"

# Download the page and parse the HTML
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
- We start by finding every container that holds the data for a single mobile phone. We then iterate through the containers and retrieve each item's name, price, and rating, printing the information to the console to check that it has been extracted correctly.
# Each product card on the results page sits in a div with this class
containers = page_soup.findAll("div", {"class": "_3O0U0u"})
print(len(containers))
print(soup.prettify(containers[0]))

# Inspect the first container: product name, price and rating
container = containers[0]
print(container.div.img["alt"])
price = container.findAll("div", {"class": "col col-5-12 _2o7WAb"})
print(price[0].text)
ratings = container.findAll("div", {"class": "niH0FQ"})
print(ratings[0].text)
- The data is then written to a newly created CSV file. First, we provide a file name and open the file in write mode, then write the headers followed by one row of data per phone. The comma-separated values (CSV) format makes the file simple to read and to import into other programmes.
filename = "products.csv"
f = open(filename, "w")
headers = "Product_Name, Pricing, Ratings \n"
f.write(headers)

for container in containers:
    product_name = container.div.img["alt"]
    price_container = container.findAll("div", {"class": "col col-5-12 _2o7WAb"})
    price = price_container[0].text.strip()
    rating_container = container.findAll("div", {"class": "niH0FQ"})
    rating = rating_container[0].text
    print("Product_Name: " + product_name)
    print("Price: " + price)
    print("Ratings: " + rating)
    # Write one comma-separated row per product (commas inside the fields are removed)
    f.write(product_name.replace(",", "") + "," + price.replace(",", "") + "," + rating.replace(",", "") + "\n")

f.close()
- In conclusion, Python web scraping is an effective method for obtaining data from websites. We may examine and use the data for numerous reasons, like data visualisation, machine learning, and data analysis, by exporting it to a CSV file. Anyone can start extracting useful data from websites and gaining insights from it with the correct tools and strategies.
9. Web Scraping Instagram using Python with Instaloader
Instagram is one of the most widely used social media sites, and a lot of its data can be scraped with Python. Instaloader is a powerful Python module that makes scraping Instagram data simple, and this blog post will demonstrate how to use it.
- First, we install Instaloader with pip. Then we create a new instance of the Instaloader class and load a profile from an Instagram handle.
# Install and import the module
!pip install instaloader
import instaloader

# Create an instance of the Instaloader class
bot = instaloader.Instaloader()

# Load a profile from an Instagram handle
profile = instaloader.Profile.from_username(bot.context, 'aman.kharwal')
print(type(profile))
- Various details of a profile can be obtained, including the username, user ID, number of posts, followers, followees, and bio.
print("Username: ", profile.username)
print("User ID: ", profile.userid)
print("Number of Posts: ", profile.mediacount)
print("Followers: ", profile.followers)
print("Followees: ", profile.followees)
print("Bio: ", profile.biography,profile.external_url)
- We can also log in to our own account, either with a username and password in the script or through an interactive login in the terminal, and then retrieve the usernames of all followers and followees.
# Login with username and password in the script
bot.login(user="your username",passwd="your password")
# Interactive login on terminal
bot.interactive_login("your username") # Asks for password in the terminal
# Retrieve the usernames of all followers
followers = [follower.username for follower in profile.get_followers()]
# Retrieve the usernames of all followees
followees = [followee.username for followee in profile.get_followees()]
print(followers)
- In the next step, we load a new profile and retrieve all of its posts as a generator object. Using the bot.download_post() method, we can loop through the posts and download them to our system.
# Load a new profile
profile = instaloader.Profile.from_username(bot.context, 'wwe')

# Get all posts in a generator object
posts = profile.get_posts()

# Iterate over the posts and download each one to its own target folder
for index, post in enumerate(posts, 1):
    bot.download_post(post, target=f"{profile.username}_{index}")
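- Accounts like 'wwe' have a very large number of posts, so for a quick test it helps to cap the loop. A minimal sketch using itertools.islice (the limit of 5 is arbitrary):
from itertools import islice

# Minimal sketch: download only the first few posts (limit chosen arbitrarily)
for index, post in enumerate(islice(profile.get_posts(), 5), 1):
    bot.download_post(post, target=f"{profile.username}_{index}")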
- In this way, Instaloader offers a quick and effective way to gather data from Instagram posts and profiles, which we can then use for sentiment analysis, recommendation systems, and other kinds of analysis.
10. Building a Simple Chatbot with Python's NLTK library
- Chatbots are becoming more and more popular, and businesses are using them to improve the customer experience. In this blog, we will use Python's Natural Language Toolkit (NLTK) module to demonstrate how to create a straightforward chatbot.
from nltk.chat.util import Chat, reflections
- To give the chatbot predefined responses, we will compile a list of patterns and answers. The list of patterns and responses and the reflections dictionary will then be passed to a newly created instance of the Chat class.
# Pairs is a list of patterns and responses.
pairs = [
    [
        r"(.*)my name is (.*)",
        ["Hello %2, How are you today ?",]
    ],
    [
        r"(.*)help(.*)",
        ["I can help you ",]
    ],
    [
        r"(.*) your name ?",
        ["My name is thecleverprogrammer, but you can just call me robot and I'm a chatbot .",]
    ],
    [
        r"how are you (.*) ?",
        ["I'm doing very well", "i am great !"]
    ],
    [
        r"sorry (.*)",
        ["Its alright", "Its OK, never mind that",]
    ],
    [
        r"i'm (.*) (good|well|okay|ok)",
        ["Nice to hear that", "Alright, great !",]
    ],
    [
        r"(hi|hey|hello|hola|holla)(.*)",
        ["Hello", "Hey there",]
    ],
    [
        r"what (.*) want ?",
        ["Make me an offer I can't refuse",]
    ],
    [
        r"(.*)created(.*)",
        ["Aman Kharwal created me using Python's NLTK library ", "top secret ;)",]
    ],
    [
        r"(.*) (location|city) ?",
        ['New Delhi, India',]
    ],
    [
        r"(.*)raining in (.*)",
        ["No rain in the past 4 days here in %2", "In %2 there is a 50% chance of rain",]
    ],
    [
        r"how (.*) health (.*)",
        ["Health is very important, but I am a computer, so I don't need to worry about my health ",]
    ],
    [
        r"(.*)(sports|game|sport)(.*)",
        ["I'm a very big fan of Cricket",]
    ],
    [
        r"who (.*) (Cricketer|Batsman)?",
        ["Virat Kohli"]
    ],
    [
        r"quit",
        ["Bye for now. See you soon :) ", "It was nice talking to you. See you soon :)"]
    ],
    [
        r"(.*)",
        ['That is nice to hear']
    ],
]
- Once the chatbot has been configured, calling the converse method of the Chat instance starts a conversation. The responses make use of the reflections dictionary, which maps first-person phrases to second-person phrases (and vice versa) so the bot can mirror what the user says:
print(reflections)
#Output
{'i am': 'you are',
'i was': 'you were',
'i': 'you',
"i'm": 'you are',
"i'd": 'you would',
"i've": 'you have',
"i'll": 'you will',
'my': 'your',
'you are': 'I am',
'you were': 'I was',
"you've": 'I have',
"you'll": 'I will',
'your': 'my',
'yours': 'mine',
'you': 'me',
'me': 'you'}
- Overall, creating a basic chatbot with NLTK is an easy procedure that only requires a small amount of code. Learning how to design a chatbot can be a useful skill for developers given the increasing need for chatbots across a range of sectors.
# You can also define your own reflections dictionary and pass it to Chat instead of the default one
my_dummy_reflections = {
    "go": "gone",
    "hello": "hey there"
}
#default message at the start of chat
print("Hi, I'm thecleverprogrammer and I like to chat\nPlease type lowercase English language to start a conversation. Type quit to leave ")
#Create Chat Bot
chat = Chat(pairs, reflections)
#Start conversation
chat.converse()
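- The Chat class also exposes a respond method, so you can query the bot programmatically instead of (or before) starting the interactive loop. A minimal sketch using the chat instance created above (the sample inputs are arbitrary):
# Minimal sketch: get single replies without the interactive loop
print(chat.respond("my name is Aman"))
print(chat.respond("what is your name ?"))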
Building a chatbot with Python’s NLTK library is simple and enjoyable, and chatbots are a great way to improve customer experience. The options for chatbot functionality are unlimited thanks to the ability to modify responses and patterns. Thus, try developing a chatbot and discover how it might enhance the user experience.