Security and HA Practices

We at IOpipe understand that securing our product and software, and protecting our users' data, is a critical requirement. To this end, we design with a security-first mindset. Across IOpipe, we use a continuous integration pipeline that allows us to:

  • respond to the latest security threats
  • patch the affected components
  • release updated code to ensure that our product is protected and hardened against them

We use this CI pipeline, coupled with the latest in DevOps best practices, to provide a robust and secure platform that our users feel comfortable using as production infrastructure.

AWS Advanced Tier Partner

As an AWS Advanced Tier Partner, IOpipe undergoes regular architecture reviews with AWS to ensure we comply with AWS Well-Architected best practices around reliability and security.

IOpipe Data Security

Encryption Everywhere

We use TLS to encrypt every portion of the communication stream: from our end users' agents to our API, and from our API to the point of ingestion into our database.
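
As a minimal illustration of what TLS on the first hop looks like, the sketch below posts a payload from an agent-like client to a collector endpoint over HTTPS with certificate verification enabled. The endpoint URL, payload shape, and header names are hypothetical, not our actual API.

    import json
    import urllib.request

    # Hypothetical collector endpoint and payload; the real API differs.
    COLLECTOR_URL = "https://collector.example.invalid/v0/event"

    def post_report(report, token):
        # Certificate verification is on by default for HTTPS URLs in urllib.
        request = urllib.request.Request(
            COLLECTOR_URL,
            data=json.dumps(report).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer " + token,  # project token travels only over TLS
            },
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.status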

Database Encryption/Encryption at Rest

All of our data, which resides in AWS Kinesis, AWS RDS, and AWS Elasticsearch, is encrypted at rest. We leverage the AWS KMS system to provide encryption keys for our active data and our archived data.
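
For example, server-side encryption with a KMS key can be enabled on a Kinesis stream with a call like the following; the stream name and key alias are placeholders, not our actual resources.

    import boto3

    kinesis = boto3.client("kinesis")

    # Enable KMS server-side encryption on an existing stream.
    kinesis.start_stream_encryption(
        StreamName="iopipe-ingest-example",   # placeholder stream name
        EncryptionType="KMS",
        KeyId="alias/example-ingest-key",     # placeholder KMS key alias
    )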

Controlled Data Access

At IOpipe we restrict access to our data stores to internal employees, retain CloudTrail audit logs, and use two-factor authentication on all internal services.

Data Retention

IOpipe retains customer function invocation records for 30 days. For our enterprise plans, we have the flexibility to offer longer data retention times as needed.

IOpipe Agent Security

The IOpipe agent libraries are developed for multiple languages. The agent gathers data, metrics, and other information from the underlying Lambda container and from the Lambda function itself. Combined with the profiler plugin, our agent can surface the finest-grained details of Lambda code execution.
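
As a rough sketch of how the Python agent hooks into a function, the example below wraps the handler so each invocation is reported. The import path, constructor argument, and decorator usage are assumptions on our part; check the agent's README for the exact interface.

    # Hypothetical sketch; the import, token argument, and decorator are assumptions.
    from iopipe import IOpipe

    iopipe = IOpipe("YOUR_PROJECT_TOKEN")  # placeholder project token

    @iopipe
    def handler(event, context):
        # Ordinary function logic; the agent reports duration, memory, errors, etc.
        return {"statusCode": 200, "body": "ok"}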

The agent submits this metadata to collection endpoints via HTTPS (TLS), and can also operate behind a VPC NAT Gateway for added security.
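
For a function attached to a VPC, outbound HTTPS to the collector can be routed through a NAT gateway in a public subnet. The boto3 sketch below shows the general shape of that setup; all resource IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a NAT gateway in a public subnet (IDs are placeholders).
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",        # public subnet
        AllocationId="eipalloc-0123456789abcdef0",  # Elastic IP for the gateway
    )
    nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

    # Route the private subnet's outbound traffic through the NAT gateway so a
    # VPC-attached Lambda can still reach the collector over HTTPS.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",       # private subnet's route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_gateway_id,
    )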

The data received by the IOpipe Collector API endpoint is validated, authorized via JWTs, and, once verified, written into our AWS Kinesis streams for ingestion.
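
A simplified sketch of that collector path follows, using PyJWT for token verification and boto3 for the Kinesis write. The signing secret, stream name, partition key choice, and payload shape are illustrative only, not our production implementation.

    import json

    import boto3
    import jwt  # PyJWT

    kinesis = boto3.client("kinesis")

    def ingest(raw_body, token, signing_key):
        # Raises jwt.InvalidTokenError if the signature or claims are invalid.
        claims = jwt.decode(token, signing_key, algorithms=["HS256"])

        record = json.loads(raw_body)
        kinesis.put_record(
            StreamName="iopipe-ingest-example",        # placeholder stream name
            Data=json.dumps(record).encode("utf-8"),
            PartitionKey=str(claims["sub"]),           # e.g. partition by the token's subject claim
        )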

The IOpipe agent gives users the ability to instrument custom metrics and custom tracing measures in their code. We understand a customer may put sensitive information into these areas of the code. This data is stored encrypted at rest, transported via TLS, and authenticated via signed requests.
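
A hypothetical example of that instrumentation in the Python agent is shown below. The context.iopipe.metric and context.iopipe.mark calls, and the need to enable the trace plugin, are assumptions and should be verified against the agent's documentation.

    # Hypothetical sketch; the metric/mark interface and decorator are assumptions.
    from iopipe import IOpipe

    iopipe = IOpipe("YOUR_PROJECT_TOKEN")  # placeholder project token

    def run_query(event):
        # Stand-in for real application work.
        return ["row-1", "row-2"]

    @iopipe
    def handler(event, context):
        context.iopipe.mark.start("db-query")   # custom trace span (may require the trace plugin)
        rows = run_query(event)
        context.iopipe.mark.end("db-query")

        # Custom metric; transported over TLS and encrypted at rest like all other data.
        context.iopipe.metric("rows_returned", len(rows))
        return {"count": len(rows)}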

Users enabling the optional profiling feature will have metadata about their application code stored in an S3 bucket controlled by IOpipe. This data is encrypted at rest and may only be accessed with keys specific to the associated invocation record.
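
To illustrate the storage side, the sketch below writes a profile artifact to S3 with KMS server-side encryption and an object key tied to the invocation record. The bucket name, key scheme, and KMS alias are placeholders.

    import boto3

    s3 = boto3.client("s3")

    def store_profile(invocation_id, profile_bytes):
        # Write a profile artifact encrypted at rest with SSE-KMS (placeholder names).
        s3.put_object(
            Bucket="iopipe-profiles-example",            # placeholder bucket
            Key="profiles/" + invocation_id + ".profile",  # object key tied to the invocation record
            Body=profile_bytes,
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId="alias/example-profiles-key",    # placeholder KMS key alias
        )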

IOpipe Dashboard Security

The IOpipe dashboard is a multi-region S3 website that sits behind SSL-encrypted CloudFront distributions operating in all regions.

For authentication, we leverage multiple identity sources and integrations through Auth0. We support two-factor authentication, GitHub sign-in, and individual accounts, all protected and authenticated through Auth0.

Once a user is logged in and authenticated, we leverage AWS infrastructure and our own code paths to log what users do in their dashboards, who is logged in, and where they logged in from. This gives us an audit trail in case any data is compromised or a compromised login is used to access your dashboard.

Data Availability

Our platform must be highly available because we are integrated at mission-critical points inside users' applications. With this in mind, we have built the entire IOpipe infrastructure to be highly available.

We leverage a multi-region, multi-availability-zone approach inside AWS. We operate our collection endpoints in 9 regions to keep data ingestion as close as possible to where the Lambda runs, to minimize latency, and to provide local data retention in case of communication failures.

Each region’s collection API endpoint has its own Kinesis stream. The Kinesis stream provides data security and resiliency before the data is written into the database. We use 7-day Kinesis data retention so that, in the event of a catastrophic database failure, we can replay the data for re-ingestion.
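
The 7-day window is a standard Kinesis setting; for reference, it can be raised from the 24-hour default with a call like the one below (the stream name is a placeholder).

    import boto3

    kinesis = boto3.client("kinesis")

    # Extend retention from the 24-hour default to 7 days (168 hours)
    # so records can be replayed after a downstream failure.
    kinesis.increase_stream_retention_period(
        StreamName="iopipe-ingest-example",  # placeholder stream name
        RetentionPeriodHours=168,
    )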

Once the data has been authenticated in its local region and written to the Kinesis stream, worker Lambdas take batched data from the stream and write it to our core Elasticsearch cluster. Because this happens after the initial agent call, it adds no latency to the instrumented function.
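
A simplified sketch of such a worker follows: it decodes the batched Kinesis records and bulk-indexes them into Elasticsearch. The client setup, endpoint, and index name are illustrative, not our production code.

    import base64
    import json

    from elasticsearch import Elasticsearch, helpers

    # Placeholder Elasticsearch endpoint; production access is restricted and authenticated.
    es = Elasticsearch("https://search.example.internal:9200")

    def handler(event, context):
        # Kinesis-triggered worker: decode the batch and bulk-index it.
        actions = []
        for record in event["Records"]:
            # Kinesis delivers record data base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            actions.append({
                "_index": "invocations",   # placeholder index name
                "_source": payload,
            })

        if actions:
            helpers.bulk(es, actions)
        return {"indexed": len(actions)}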

Our core databases, RDS and Elasticsearch, run in a multi-AZ scenario to accommodate any AZ downtimes that may occur.

For RDS, we rely on failover in case of an outage and maintain multiple read replicas to keep the data highly available.
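
As an example of the building blocks involved, Multi-AZ failover and a read replica can be configured with boto3 roughly as follows; the instance identifiers are placeholders.

    import boto3

    rds = boto3.client("rds")

    # Run the primary instance Multi-AZ so it fails over automatically
    # to a standby in another AZ during an outage.
    rds.modify_db_instance(
        DBInstanceIdentifier="iopipe-primary-example",  # placeholder identifier
        MultiAZ=True,
        ApplyImmediately=True,
    )

    # Add a read replica to spread read traffic and improve availability.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="iopipe-replica-example",
        SourceDBInstanceIdentifier="iopipe-primary-example",
    )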

For Elasticsearch, we use a shard allocation pattern that ensures at least one copy of every shard is present in every AZ so that data can be readily accessed.
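
The sketch below shows the kind of cluster setting that enforces zone-aware shard allocation. The endpoint, the "zone" attribute, and the zone values are placeholders for how nodes are tagged; it is not our production configuration.

    import requests

    # Placeholder endpoint; production access is restricted and authenticated.
    ES_ENDPOINT = "https://search.example.internal:9200"

    # Force zone-aware shard allocation so every AZ holds a copy of each shard.
    settings = {
        "persistent": {
            "cluster.routing.allocation.awareness.attributes": "zone",
            "cluster.routing.allocation.awareness.force.zone.values": "us-east-1a,us-east-1b,us-east-1c",
        }
    }
    response = requests.put(ES_ENDPOINT + "/_cluster/settings", json=settings, timeout=10)
    response.raise_for_status()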

Summary

We at IOpipe believe in being transparent about our architecture, development, and security practices. We firmly believe the security and availability of our users' data is critical. To that end, we follow industry best practices and seek to exceed those standards wherever possible.