Pandas is a powerful and versatile Python data analysis library that speeds up the preprocessing steps of your project. It is not only used for preprocessing but also for exploring the data. You can and should push the limits to see what Pandas is capable of. In this post, I focus on how to filter DataFrames, but more importantly, I will try to convey the message:
The more you use Pandas in your data analysis, the more useful it gets.

As always, we start by importing Pandas and NumPy:
import pandas as pd
import numpy as np
I will use a part of the telco customer churn dataset available on Kaggle.
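The DataFrame can be created by reading the downloaded CSV file into Pandas. The file name below is an assumption based on the copy of the dataset on Kaggle, so adjust the path to match your own file:
df = pd.read_csv('WA_Fn-UseC_-Telco-Customer-Churn.csv')  # file name assumed; adjust to your local path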
df.head()

The good thing about Pandas is that there are multiple ways to accomplish a task. Pandas' core data structure, the DataFrame, consists of labeled rows and columns. Pandas is very flexible in terms of what can be done on rows and columns, which makes it very easy to play around with what is stored in the DataFrame: the data.
Try to push the limits and you will be surprised what Pandas is capable of.
In this section, I will write down some possible questions you may encounter while working on data, and then show a way to find the answer using Pandas. Please keep in mind that there are almost always other ways to do the same thing, so feel free to find the answer in a different way. I think it will be good practice.
Note: I will also put the code in text in case you want to modify or try different things easily.
- What is the internet service type distribution for customers who have been customers for less than 10 months?
df[df.tenure < 10]['InternetService'].value_counts()

The tenure variable shows how long the customer has been holding a contract, in months.
- What is the average monthly charge for customers who pay with electronic check?
df[df.PaymentMethod == 'Electronic check']['MonthlyCharges'].mean()

- How many customers pay with electronic check or mailed check, and what is the average monthly charge for those customers?
method = ['Electronic check', 'Mailed check']
df[df.PaymentMethod.isin(method)].groupby('PaymentMethod')['MonthlyCharges'].agg(['count','mean'])

- What is the churn rate for customers who stayed for less than three months and use DSL internet service?
Note: We first need to convert the values in the Churn column to numeric values to be able to compute aggregations. 'Yes' will be converted to 1 and 'No' will be converted to 0. Then we can apply aggregate functions.
churn_numeric = {'Yes':1, 'No':0}
df['Churn'] = df['Churn'].replace(churn_numeric)
df[(df.tenure < 3) & (df.InternetService == 'DSL')].Churn.mean()

- There was a problem with the DSL contracts 7 months ago which lasted for three months. I want to see the effect of this problem on the churn rate. What is the churn rate for customers whose contracts were not signed in this period?
df[(~df.tenure.isin([7,6,5])) & (df.InternetService == 'DSL')]['Churn'].mean()

Note: The tilde (~) operator is used as NOT; it inverts a boolean filter.
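For instance, the following two filters select the same rows (assuming tenure has no missing values), since negating "less than 10" gives "10 or more":
df[~(df.tenure < 10)]   # NOT (tenure less than 10)
df[df.tenure >= 10]     # the same condition written directly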
- I want to group customers into 3 categories based on how long they have been a customer and find the average churn rate for each group.
df['tenure_cat']= pd.qcut(df.tenure, 3, labels=['short','medium','long'])
df[['tenure_cat','Churn']].groupby('tenure_cat').mean()
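Note that qcut splits customers into three groups of roughly equal size based on tenure quantiles. If you wanted three bins of equal tenure width instead, pd.cut would be the alternative; a minimal sketch reusing the same column and labels:
df['tenure_cat'] = pd.cut(df.tenure, 3, labels=['short','medium','long'])  # equal-width tenure bins instead of quantile-based ones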

- I want to get an overview of how churn rate changes based on gender, contract type and being a senior.
df.pivot_table(index=['gender','SeniorCitizen'], columns='Contract', values='Churn', aggfunc='mean', margins=True)
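In line with the idea that there are multiple ways to do the same thing, the same overview (without the margins) could also be built with groupby and unstack; a sketch:
df.groupby(['gender','SeniorCitizen','Contract'])['Churn'].mean().unstack()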

- What is the internet service distribution for customers in the three most common tenure values (i.e., the tenure months with the most customers)?
df[df.tenure.isin(df.tenure.value_counts().index[:3])]['InternetService'].value_counts()
OR
df[df.tenure.isin(df.tenure.value_counts().nlargest(3).index)]['InternetService'].value_counts()

This is just a small piece of what you can do with Pandas. The more you work with it, the more useful and practical ways you will find. I suggest approaching a problem in different ways and never placing a limit in your mind.
Practice makes perfect.
When you are trying hard to find a solution to a problem, you will almost always learn more than just the solution at hand. You will improve your skill set step by step and build a robust and efficient data analysis process.
Thanks for reading. Please let me know if you have any feedback.