I participate in a "fitness challenge" where players log daily points. The format of the collected data is not ideal, so I aim to clean this data, store it indefinitely on AWS in RDS, and make it available via FastAPI so that others can use the data for analysis.

I followed his tutorial to have a successful first basic deployment, and he also provided sage advice when I got stuck in a few places. Essentially, I catered his project to my specific needs and added the ability to connect to a PostgreSQL database. The AWS technologies used in production are: Lambda, RDS (PostgreSQL), S3, and CloudWatch, deployed with AWS SAM.

Connect to RDS through an SSH tunnel

We will map a local port to the remote port RDS listens on for connections, and connect to RDS through the web server that hosts your application and already has access to RDS. This is a template of the command:

```
ssh -N -L localPort:rdsHost:remotePort -i ~/path/to/key
```

Explanation:

- -N: do not execute a remote command (useful for forwarding ports).
- -L: forward localPort to remotePort.
- localPort: the port your local database client connects to. You can set this to any available port, such as 1234, 3306, and so on.
- rdsHost: your RDS endpoint (URL).
- remotePort: the port your remote database listens on for connections. For PostgreSQL databases, the default is 5432.
- -i: identity (key file). These are the credentials you use to ssh into your web server (EC2).
- The final argument is the username and the remote instance your tunnel will connect to the database through.

Example:

```
ssh -N -L :3306 -i ~/.ssh/AwesomeServerKey.pem
```

Running this command "opens" the ssh tunnel, which I can now use. For convenience, I'd recommend setting up an alias for this command.

To connect with SequelPro, specify the localPort from earlier (1234), and connect through localhost (127.0.0.1) using your RDS username and password.
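To make the pieces of the template concrete, here is a small Python sketch that assembles the tunnel command from its parts. The helper name, default ports, and the hostnames in the usage example are my own illustration, not values from the project:

```python
def build_tunnel_command(rds_host, user_at_host, key_path,
                         local_port=1234, remote_port=5432):
    """Assemble the ssh port-forwarding command described above.

    -N keeps ssh from running a remote command; -L maps local_port on
    this machine to remote_port on the RDS endpoint, routed through
    the web server we ssh into (user_at_host).
    """
    return (f"ssh -N -L {local_port}:{rds_host}:{remote_port} "
            f"-i {key_path} {user_at_host}")

# Hypothetical values for illustration only:
cmd = build_tunnel_command(
    rds_host="mydb.abc123.eu-central-1.rds.amazonaws.com",
    user_at_host="ubuntu@my-webserver.example.com",
    key_path="~/.ssh/AwesomeServerKey.pem",
)
print(cmd)
```

The defaults mirror the values used in this post: 1234 as the arbitrary local port and 5432 as the PostgreSQL default.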
Project structure:

- crud.py: specifies CRUD (create, read, update, delete) actions.
- database.py: sets up the connection with PostgreSQL.
- schemas.py: pydantic models are specified here, which I believe dictates the output format when the API is called.
- routers/: folder containing subsets of routes.
- Pre-commit.yaml: config file for the pre-commit tool.
- requirements.txt: requirements to install when the project is built using SAM.
- template.yml: essentially the recipe for deploying the project to AWS.

Setup (linux)

Install and configure AWS CLI and SAM

In order to proceed with set-up and deployment, the AWS CLI and SAM need to be installed and configured on your machine.

Create a role for the Lambda function:

- Select AWS service as the type, and choose Lambda as the use case.
- AWSLambdaBasicExecutionRole: permission to upload logs to CloudWatch.
- AWSLambdaVPCAccessExecutionRole: permission to connect our Lambda function to a VPC.
- Finish creating the role, and set its name to fastapilambdarole. This name matches the role specified in template.yml.

Create an S3 bucket

When we deploy our code with AWS SAM, a zip folder of our code will be uploaded to S3. There are two options for creating an S3 bucket: (1) through the AWS console, or (2) with the AWS CLI:

```
aws s3api create-bucket \
    --create-bucket-configuration LocationConstraint=eu-central-1
```

Also, ensure that you change the region (eu-central-1 above) to your local region. Please note that S3 bucket names need to be globally unique, so the name of the bucket you create here will determine the bucket name used in later steps.

AWS ACCESS POSTGRES USING PSEQUEL INSTALL

Clone project and test locally

```
git clone
cd fastapi-postgres-aws-lambda

# create and activate a virtual environment
pip install -r requirements.txt
pip install uvicorn
```
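To illustrate the split between database.py (connection setup) and crud.py (create/read actions), here is a minimal self-contained sketch. It uses the stdlib sqlite3 module purely so the example runs anywhere; the actual project connects to PostgreSQL on RDS, and all function names here are my own illustration:

```python
import sqlite3

# database.py equivalent: set up a connection and the fit table
# (columns taken from the CREATE TABLE statement later in this post;
# sqlite3 stands in for PostgreSQL so the sketch is self-contained).
def get_connection():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fit ("
        "id INTEGER PRIMARY KEY, player TEXT, team TEXT, "
        "season TEXT, data_date TEXT, points REAL)"
    )
    return conn

# crud.py equivalent: create and read actions on the fit table.
def create_entry(conn, player, team, season, data_date, points):
    conn.execute(
        "INSERT INTO fit (player, team, season, data_date, points) "
        "VALUES (?, ?, ?, ?, ?)",
        (player, team, season, data_date, points),
    )
    conn.commit()

def read_entries(conn, player):
    cur = conn.execute(
        "SELECT player, points FROM fit WHERE player = ?", (player,)
    )
    return cur.fetchall()

conn = get_connection()
create_entry(conn, "alice", "red", "2021", "2021-05-01", 3.5)
print(read_entries(conn, "alice"))  # [('alice', 3.5)]
```

In the real project the routers call functions like these, and schemas.py's pydantic models shape what the API returns.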
In order to test locally without errors, PostgreSQL needs to be installed on your local machine, and the sample data needs to be loaded into a database table:

```
postgres=# CREATE TABLE fit (id serial, player varchar(50), team varchar(50), season varchar(50), data_date date, points float);
postgres=# \copy fit(player, team, season, data_date, points) from 'clean_fit.csv' with DELIMITER ',' CSV HEADER
```

Start FastAPI:

```
uvicorn app.main:app --reload
# click the link to open the browser at http://127.0.0.1:8000
```
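The `\copy` command above expects a comma-delimited file with a header row. As a quick sanity check of clean_fit.csv before loading it, the same parsing can be sketched in plain Python; the inline sample rows are invented for illustration, and in practice you would open the real file:

```python
import csv
import io

# Stand-in for clean_fit.csv: a comma-delimited file with a header,
# matching the columns named in the \copy command above.
sample = io.StringIO(
    "player,team,season,data_date,points\n"
    "alice,red,2021,2021-05-01,3.5\n"
    "bob,blue,2021,2021-05-01,2.0\n"
)

# DictReader consumes the header row, like the CSV HEADER option.
reader = csv.DictReader(sample)
rows = [
    (r["player"], r["team"], r["season"], r["data_date"], float(r["points"]))
    for r in reader
]
print(rows)
```

If every row parses and `points` converts cleanly to a float, the file should load into the fit table without errors.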