From the course: Advanced Data Engineering with Snowflake
What this course will cover
Much of what we'll do in this course will expand on the knowledge we covered in the first course. In the first course, we started from scratch and contextualized how to build a data pipeline using the ITD framework: ingestion, transformation, and delivery. We also covered orchestration, which helped you add automation to the pipeline. Starting from scratch allowed us to build a pipeline together piece by piece so that you could put the ITD framework into action.

In this course, we'll do things a little differently. We won't build a new pipeline from scratch. Instead, we'll start with a prebuilt pipeline and expand it by implementing the concepts covered in this course. We'll cover two topics: DevOps with Snowflake and observability for data pipelines. By the end of the course, you'll know how to take your pipelines to the next level: using DevOps to streamline and automate the development of your pipelines, implementing observability to keep an eye on your pipeline's health, and capturing and ingesting streaming data into Snowflake in near real time.

Just like in the first course, we're going to focus on the practical over the theoretical. Whenever necessary, we'll provide any background information you'll need to understand a concept, but generally we're going to focus on the what and the how behind each concept and feature. This will help you quickly understand new concepts, enable you to start using them right away in Snowflake, and help you feel confident in what you're doing.

To get you to that outcome, we've designed this course to be hands-on. In fact, this course won't be a passive learning experience. For most exercises in the course, you're expected to code along with me so that we successfully build something together. This is so that you can learn by doing and get hands-on experience with the concepts we'll cover. We'll actively use tools like the command line, GitHub, SQL, Python, and more.

And as in the previous course, we won't exhaustively cover every advanced data engineering feature, technique, or pipeline architecture. That's intentional. But what you'll learn in this course will make you dangerously good at building more complex and efficient pipelines with Snowflake. In addition, you'll learn enough to feel comfortable exploring concepts and features that we might not cover in this course.

Now join me in the next video to learn more about DevOps and data engineering.