```python
import requests

def fetch_batch(it):
    """Fetch each URL in a partition, reusing one HTTP session."""
    session = requests.Session()
    try:
        for url in it:
            yield session.get(url).text
    finally:
        session.close()  # runs even if the generator is closed early
```
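A per-partition generator like `fetch_batch` is typically passed to `rdd.mapPartitions`, so one HTTP session is opened per partition rather than per record. The sketch below simulates that call pattern in plain Python — no Spark cluster, and a stubbed fetch standing in for `session.get(url).text` — just to show how Spark consumes the generator partition by partition:

```python
def fetch_batch_stub(urls):
    # Stand-in for fetch_batch: imagine a Session() opened here,
    # reused for every URL in this partition, then closed at the end.
    for url in urls:
        yield f"body-of-{url}"  # simulates session.get(url).text

def map_partitions(partitions, fn):
    # Plain-Python stand-in for rdd.mapPartitions(fn):
    # fn receives an iterator over one partition's elements.
    for part in partitions:
        yield from fn(iter(part))

partitions = [["http://a", "http://b"], ["http://c"]]
print(list(map_partitions(partitions, fetch_batch_stub)))
# → ['body-of-http://a', 'body-of-http://b', 'body-of-http://c']
```

The payoff is connection reuse: with `map` you would pay session setup once per URL; with `mapPartitions` you pay it once per partition.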
### 🎯 Your Next Step
```python
# 3️⃣ Keep only unique words
distinct_words = words.distinct()
```
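`distinct()` deduplicates the RDD via a shuffle, so equal elements land on the same partition before being collapsed; element order is not guaranteed afterwards. As a plain-Python picture of the end result (sample `words` list is made up, and `sorted()` is only there to give a stable display order):

```python
words = ["spark", "rdd", "spark", "shuffle", "rdd"]  # made-up sample data
distinct_words = sorted(set(words))  # set() deduplicates, like distinct()
print(distinct_words)  # → ['rdd', 'shuffle', 'spark']
```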
```scala
val result = df
  .groupBy($"department")
  .agg(count("*").as("emp_cnt"), avg($"salary").as("avg_salary"))
  .filter($"emp_cnt" > 5)
```
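To make the aggregation concrete, here is the same logic spelled out in plain Python: count employees and average salary per department, then keep only departments with more than 5 employees. The department names and salaries are made-up sample data; only the `> 5` threshold comes from the snippet above.

```python
from collections import defaultdict

rows = [  # (department, salary) — made-up sample data
    ("eng", 100), ("eng", 120), ("eng", 110),
    ("eng", 130), ("eng", 90), ("eng", 150),
    ("hr", 80), ("hr", 85),
]

# groupBy($"department")
groups = defaultdict(list)
for dept, salary in rows:
    groups[dept].append(salary)

# agg(count, avg) then filter(emp_cnt > 5)
result = {
    dept: {"emp_cnt": len(s), "avg_salary": sum(s) / len(s)}
    for dept, s in groups.items()
    if len(s) > 5
}
print(result)  # only "eng" survives the emp_cnt > 5 filter
```

In Spark, `groupBy` triggers a shuffle so each department's rows meet on one partition; the `filter` on the aggregated column runs after the aggregation, like SQL's `HAVING`.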
## 7. Putting It All Together – A Mini‑Project Blueprint
- [ ] All code compiles/runs on Spark 2.x (no 3.x‑only APIs).
- [ ] Comments are present for every non‑obvious line.
- [ ] You’ve referenced at least **one** Spark concept (lazy evaluation, shuffle, broadcast, etc.).
- [ ] Edge cases are discussed.
- [ ] The answer is written **in your own words** (no copy‑pasting from the internet).
| Operation | PySpark | Scala |
|-----------|---------|-------|
| **Read CSV** | `spark.read.option("header","true").csv(path)` | `spark.read.option("header","true").csv(path)` |
| **Write Parquet** | `df.write.parquet("out.parquet")` | `df.write.parquet("out.parquet")` |
| **Cache** | `df.cache()` | `df.cache()` |
| **Repartition** | `df.repartition(10)` | `df.repartition(10)` |
| **Window** | `from pyspark.sql.window import Window` | `import org.apache.spark.sql.expressions.Window` |
| **UDF** | `spark.udf.register("toUpper", lambda s: s.upper(), StringType())` | `udf((s: String) => s.toUpperCase, StringType)` |
| **Streaming read** | `spark.readStream.format("socket")...` | `spark.readStream.format("socket")...` |
| **Stop Spark** | `spark.stop()` | `spark.stop()` |
## 8. Final Checklist Before Submitting
---
## 6. Quick Reference Cheatsheet (Spark 2.4)
---