Web scraper for collecting product and review data from e-commerce sites using ScrapingBee, AWS, Selenium, and Pandas. Focuses on keeping scraping costs low, exposing a simple interface, and extracting and analyzing data efficiently.
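A sketch of how a page fetch might be routed through ScrapingBee's HTTP API. The endpoint and `api_key`/`url`/`render_js` parameters follow ScrapingBee's documented interface, but the key and target URL here are placeholders, and this helper only builds the request URL rather than issuing it:

```python
from urllib.parse import urlencode

# ScrapingBee proxies page fetches through a single GET endpoint.
SCRAPINGBEE_ENDPOINT = "https://app.scrapingbee.com/api/v1/"

def build_scrape_url(api_key: str, target_url: str, render_js: bool = True) -> str:
    """Return the GET URL that fetches target_url through ScrapingBee.

    render_js asks ScrapingBee to execute JavaScript before returning HTML,
    which matters for dynamic product pages.
    """
    params = {
        "api_key": api_key,
        "url": target_url,
        "render_js": "true" if render_js else "false",
    }
    return SCRAPINGBEE_ENDPOINT + "?" + urlencode(params)

print(build_scrape_url("MY_KEY", "https://example.com/product/42"))
```

The returned HTML would then be parsed and loaded into a Pandas DataFrame for analysis.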
This lab involves creating a Flask web app hosted on AWS EC2. It features user registration, login functionality, and data storage using SQLite. The app supports file uploads, user form submissions, and authentication. Apache with mod_wsgi is used to deploy the app, and SQL queries manage interactions with the database.
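The registration/login layer on top of SQLite might look like the sketch below, using salted PBKDF2 hashes. The `users` table and column names are illustrative, not the lab's actual schema:

```python
import hashlib
import hmac
import os
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    # One row per user; the salt is stored alongside the derived hash.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        "username TEXT PRIMARY KEY, salt BLOB NOT NULL, pw_hash BLOB NOT NULL)"
    )

def register(conn: sqlite3.Connection, username: str, password: str) -> None:
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (username, salt, pw_hash))

def login(conn: sqlite3.Connection, username: str, password: str) -> bool:
    row = conn.execute(
        "SELECT salt, pw_hash FROM users WHERE username = ?", (username,)
    ).fetchone()
    if row is None:
        return False
    salt, stored = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking hash prefixes via timing.
    return hmac.compare_digest(candidate, stored)
```

In the deployed app these functions would sit behind Flask routes served by Apache/mod_wsgi.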
In this project, I analyzed weather data from the NCEI Global Surface Summary of Day dataset using PySpark in Jupyter Notebook. Tasks included data cleaning, statistical analysis, and forecasting for temperature, wind speed, precipitation, and extreme weather events. The project also predicts future weather patterns for Cincinnati and Florida.
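The PySpark pipeline groups daily GSOD records by station and aggregates them; this plain-Python sketch mirrors that per-station aggregation on a few made-up rows (the station names and values are invented for illustration, and the real notebook runs the equivalent `groupBy`/`agg` on a Spark DataFrame):

```python
from collections import defaultdict

# Toy stand-ins for cleaned GSOD daily rows.
records = [
    {"station": "CINCINNATI", "temp_f": 55.0, "wind_kts": 8.0},
    {"station": "CINCINNATI", "temp_f": 61.0, "wind_kts": 12.0},
    {"station": "MIAMI", "temp_f": 84.0, "wind_kts": 10.0},
]

def mean_temp_by_station(rows):
    """Average temperature per station, like df.groupBy('station').avg('temp_f')."""
    sums = defaultdict(lambda: [0.0, 0])
    for r in rows:
        acc = sums[r["station"]]
        acc[0] += r["temp_f"]
        acc[1] += 1
    return {station: total / n for station, (total, n) in sums.items()}

print(mean_temp_by_station(records))
```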
This project is a comprehensive online shopping platform featuring a Django backend and a React frontend, designed to manage products, orders, and customer information efficiently. It includes functionalities for shopping cart management, order processing, and integration with PostgreSQL for data storage.
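The core of order processing is computing a cart's total from its line items; a framework-free sketch of that logic (item names and the tax rate are made up, and in the real app this would live in the Django backend against PostgreSQL-backed models):

```python
from dataclasses import dataclass

@dataclass
class CartItem:
    name: str
    unit_price: float
    quantity: int

def cart_total(items: list[CartItem], tax_rate: float = 0.0) -> float:
    """Sum line items and apply a flat tax rate, rounded to cents."""
    subtotal = sum(item.unit_price * item.quantity for item in items)
    return round(subtotal * (1 + tax_rate), 2)
```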
This project aims to create a predictive model that forecasts the likelihood of a patient being readmitted to the hospital within 30 days of discharge.
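A fitted readmission model ultimately turns patient features into a probability; this toy sketch shows the scoring step for a logistic model, with features and coefficients invented purely for illustration (they are not the project's fitted values):

```python
import math

# Hypothetical fitted coefficients; a real model would learn these from data.
COEFFS = {"prior_admissions": 0.6, "length_of_stay": 0.1, "age_over_65": 0.4}
INTERCEPT = -2.0

def readmission_probability(patient: dict) -> float:
    """Logistic score: probability of readmission within 30 days."""
    z = INTERCEPT + sum(COEFFS[f] * patient.get(f, 0.0) for f in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))
```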
The bulletin board project is a real-time messaging application developed using Python's socket programming, enabling users to post and view messages on a virtual bulletin board. It features a server-client architecture that supports concurrent connections, so multiple users can post and read messages at the same time.
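A pared-down sketch of that server-client design: the server keeps a shared message list guarded by a lock and serves each accepted client on its own thread. The `POST`/`VIEW` wire commands here are illustrative, not the project's exact protocol:

```python
import socket
import threading

board: list[str] = []          # shared message store
board_lock = threading.Lock()  # guards board across client threads

def handle_client(conn: socket.socket) -> None:
    """Serve one client: either append a message or return the board."""
    with conn:
        data = conn.recv(4096).decode()
        if data.startswith("POST "):
            with board_lock:
                board.append(data[5:])
            conn.sendall(b"OK")
        elif data == "VIEW":
            with board_lock:
                conn.sendall("\n".join(board).encode() or b"(empty)")

def serve(server_sock: socket.socket, n_clients: int) -> None:
    """Accept n_clients connections, handling each on its own thread."""
    for _ in range(n_clients):
        conn, _addr = server_sock.accept()
        threading.Thread(target=handle_client, args=(conn,)).start()
```

A client would `connect`, send one command, and read the reply; the per-connection threads are what let several users interact with the board concurrently.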
This Dockerized Python application analyzes two text files (IF.txt and AlwaysRememberUsThisWay.txt). It counts total words, identifies the largest file, and finds the top three most frequent words in each. Results are saved to an output file and printed to the console.
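The counting step reduces to a word tally per file; a sketch of that analysis, assuming each file has been read into a string (the sample text below stands in for the contents of IF.txt and AlwaysRememberUsThisWay.txt):

```python
import re
from collections import Counter

def word_stats(text: str) -> tuple[int, list[tuple[str, int]]]:
    """Return (total word count, top three most frequent words)."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(words), Counter(words).most_common(3)

total, top3 = word_stats("the rain and the wind and the rain")
print(total, top3)
```

Comparing the per-file totals identifies the largest file, and the results would then be written to the output file as described.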