DD R2
| Question | Answer |
|---|---|
| What happens when there are 10,000 campaigns | Use pagination or virtualization so only visible rows render. |
| How do you virtualize tables | Render only what’s on screen using a windowing library (see the windowing sketch after this table). |
| How do you debounce filters and why | Delay the filter action until the user stops typing for ~300 ms, to avoid redundant re-renders and server requests (see the debounce sketch after this table). |
| How do you show loading vs skeletons | Skeletons for layout stability, spinners for full-page loads. |
| How do you prevent flicker | Keep previous data on screen while new data loads. |
| How do you support mobile | Responsive layouts and touch-friendly controls. |
| How do you make this accessible | Keyboard navigation, labels, and ARIA for dynamic content. |
| How do you handle broken data | Fallback UI with clear error messages and retry. |
| How do you let users compare campaigns | Multi-select rows or side-by-side charts. |
| What metrics do you log from this UI | Views, filter usage, clicks, exports, and campaign actions. |
| What is an ad impression | An impression is counted when an ad is shown to a user. |
| What is a click | A click is when a user taps or clicks an ad. |
| What is CTR | Click-through rate = clicks ÷ impressions, measuring how engaging an ad is (worked metric examples follow this table). |
| What is a conversion | A conversion is a desired outcome after clicking, like a purchase or sign-up. |
| What is ROAS | Return on ad spend = revenue ÷ ad spend, showing ad profitability. |
| What is CPC vs CPA | CPC is cost per click; CPA is cost per conversion. |
| What does “budget pacing” mean | How fast a campaign is spending its budget over time (see the pacing example after this table). |
| What does “bid” mean | The amount an advertiser is willing to pay per click or impression. |
| What is an ad auction | A real-time system that chooses which ad to show based on bids and relevance. |
| What is creative rotation | Automatically showing different ad creatives to test performance. |
| How do we avoid double counting | Use unique event IDs and dedupe during aggregation (see the dedupe sketch after this table). |
| How do we handle delayed events | Allow late data and backfill metrics when it arrives. |
| What happens if tracking is broken | Detect drops in event volume and flag missing data. |
| How do we validate metrics | Compare raw logs to aggregates and monitor ratios. |
| How do you backfill missing data | Replay historical logs to recompute aggregates. |
| How do you detect fraud | Look for abnormal click rates, IPs, or behavior patterns. |
| What happens if impressions > requests | That signals logging or aggregation errors. |
| How do you handle partial failures | Show partial data with warnings instead of blocking the UI. |
| What does data freshness mean | How up-to-date the metrics are. |
| How do you reconcile logs vs aggregates | Use logs as ground truth and debug mismatches. |
| How would you A/B test a new ad format | Split users into control and test and compare key metrics. |
| What is statistical significance | The likelihood that results aren’t due to random chance (see the z-test sketch after this table). |
| What is a control group | Users who see the existing version. |
| What metrics matter for success | CTR, conversions, revenue, and guardrails like user experience. |
| How do you avoid misleading results | Use enough data and avoid cherry-picking time windows. |
| What if users switch buckets | Use consistent assignment to prevent contamination (see the bucketing sketch after this table). |
| How long should tests run | Long enough to reach statistical significance. |
| How do you interpret noisy data | Look for trends and confidence intervals, not just point values. |
| How do you present results to PMs | Show uplift, confidence, and business impact. |
| What would you show advertisers | Clear performance metrics like spend, clicks, and ROAS. |
| Questions to ask before solving | Who is the user and what decision are they making? Is the data real-time, near-real-time, or batched daily? How accurate is the data and how often does it change? What scale are we operating at? |
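
The virtualization answer above is easiest to see in code. Below is a minimal windowing sketch in TypeScript: given the scroll offset, it computes which row indices to render. Real dashboards typically reach for a library like react-window; the fixed row height and overscan values here are illustrative assumptions.

```ts
// Minimal windowing sketch: compute the visible row range from scroll state.
// Assumes fixed-height rows; overscan rows avoid blank flashes while scrolling.
interface WindowParams {
  scrollTop: number;      // current scroll offset in px
  viewportHeight: number; // visible area height in px
  rowHeight: number;      // fixed row height in px
  totalRows: number;
  overscan: number;       // extra rows rendered above/below the viewport
}

function visibleRange(p: WindowParams): { first: number; last: number } {
  const first = Math.max(0, Math.floor(p.scrollTop / p.rowHeight) - p.overscan);
  const last = Math.min(
    p.totalRows - 1,
    Math.ceil((p.scrollTop + p.viewportHeight) / p.rowHeight) + p.overscan
  );
  return { first, last }; // render only rows[first..last]
}

// With 10,000 rows, a 600 px viewport, and 40 px rows, only ~22 rows render:
console.log(
  visibleRange({ scrollTop: 8000, viewportHeight: 600, rowHeight: 40, totalRows: 10000, overscan: 3 })
);
```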
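The debounce answer can be made just as concrete. Here is a minimal, dependency-free debounce sketch; the ~300 ms delay and the `onFilterChange` handler name are illustrative assumptions, not any specific library’s API.

```ts
// Minimal debounce sketch: the wrapped function fires only after callers
// have been quiet for `delayMs` milliseconds.
function debounce<T extends (...args: any[]) => void>(
  fn: T,
  delayMs: number
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer); // reset on every keystroke
    timer = setTimeout(() => fn(...args), delayMs); // fire only after a pause
  };
}

// Usage: the (hypothetical) filter handler runs once typing pauses.
const onFilterChange = (query: string) => {
  console.log(`filtering campaigns by "${query}"`);
};
const debouncedFilter = debounce(onFilterChange, 300);
debouncedFilter("sho");
debouncedFilter("shoe");
debouncedFilter("shoes"); // only this call ends up triggering onFilterChange
```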
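The metric definitions (CTR, ROAS, CPC, CPA) are simple ratios, so a worked example helps. All sample numbers below are invented purely for illustration.

```ts
// Worked examples of the metric formulas defined in the table above.
const impressions = 50_000;
const clicks = 1_000;
const conversions = 50;
const spend = 500;     // ad spend in dollars
const revenue = 2_000; // attributed revenue in dollars

const ctr = clicks / impressions; // 0.02 -> 2% click-through rate
const roas = revenue / spend;     // 4.0 -> $4 earned per $1 spent
const cpc = spend / clicks;       // $0.50 cost per click
const cpa = spend / conversions;  // $10 cost per conversion

console.log({ ctr, roas, cpc, cpa });
```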
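Budget pacing likewise reduces to comparing actual spend against expected spend. A worked example, assuming even pacing is the goal and with invented numbers:

```ts
// Worked budget-pacing example: is the campaign spending faster than planned?
const budget = 10_000;   // total campaign budget in dollars
const campaignDays = 30;
const daysElapsed = 10;
const spentSoFar = 4_500;

const expectedSpend = budget * (daysElapsed / campaignDays); // ~$3,333 if pacing evenly
const paceRatio = spentSoFar / expectedSpend;                // 1.35 -> spending 35% too fast

console.log(paceRatio > 1 ? "overpacing" : "on track or underpacing");
```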
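The double-counting answer assumes each tracking event carries a unique ID. Here is a minimal dedupe sketch under that assumption; the `eventId` field is hypothetical, and production pipelines usually dedupe within a time window rather than holding all IDs in memory.

```ts
// Minimal dedupe sketch: drop events whose eventId has already been seen.
interface ClickEvent {
  eventId: string;    // assumed unique per logical event
  campaignId: string;
}

function dedupe(events: ClickEvent[]): ClickEvent[] {
  const seen = new Set<string>();
  return events.filter((e) => {
    if (seen.has(e.eventId)) return false; // duplicate: drop it
    seen.add(e.eventId);
    return true;
  });
}

// A retried network call can emit the same event twice:
const raw: ClickEvent[] = [
  { eventId: "e1", campaignId: "c42" },
  { eventId: "e1", campaignId: "c42" }, // duplicate of e1
  { eventId: "e2", campaignId: "c42" },
];
console.log(dedupe(raw).length); // 2
```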
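For statistical significance, a standard tool for comparing two CTRs is the two-proportion z-test. This sketch implements the textbook formula directly; in practice you would lean on a stats package rather than rolling your own.

```ts
// Two-proportion z-test: compare click rates between control (A) and test (B).
function twoProportionZ(
  clicksA: number, usersA: number,
  clicksB: number, usersB: number
): number {
  const pA = clicksA / usersA;
  const pB = clicksB / usersB;
  const pPool = (clicksA + clicksB) / (usersA + usersB); // pooled rate under H0
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se; // |z| > 1.96 ≈ significant at the 5% level
}

// Invented sample numbers: control 2.0% CTR vs test 2.3% CTR.
const z = twoProportionZ(2_000, 100_000, 2_300, 100_000);
console.log(z.toFixed(2)); // ≈ 4.62, well past 1.96
```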
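Finally, consistent bucket assignment usually means hashing a stable user ID so the same user always lands in the same variant. The string hash below is a simple illustrative one, not a production-grade hash.

```ts
// Deterministic bucketing sketch: hash(experiment + userId) picks the variant,
// so repeat visits by the same user never switch buckets.
function bucket(userId: string, experiment: string, variants: string[]): string {
  const key = `${experiment}:${userId}`;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return variants[hash % variants.length];
}

// The same user always gets the same variant for a given experiment:
console.log(bucket("user-123", "new-ad-format", ["control", "test"]));
console.log(bucket("user-123", "new-ad-format", ["control", "test"])); // identical
```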