Design a Facebook-Style News Feed for Android
A realistic interview walkthrough. We start from requirements, build up the architecture layer by layer, write production code, and handle the follow-up curveballs interviewers love to throw.
Every section and question is tagged with a difficulty badge so you know where to focus your prep time.
Let's design the Facebook News Feed for Android. I want you to implement pagination from scratch — do not use Jetpack Paging 3 library. I want to see that you understand the underlying mechanics. Where would you like to start?
Before jumping into design, I'd like to confirm the functional requirements first, then the non-functional requirements. That way we have a shared understanding of scope.
Functional Requirements (what the app does):
- Display a scrollable feed of posts (text, images, mixed)
- Each post shows author info, content, timestamp, and engagement counts (likes, comments, shares)
- Pull-to-refresh to get the latest posts
- Infinite scroll with pagination — no "Load More" button
- Search posts by keyword
- Filter posts by type (photos only, text only, viral)
- Create a new post
Non-Functional Requirements (how the app behaves):
- Offline support — cached feed should be available without network
- Survive configuration changes (screen rotation) without data loss
- Smooth scrolling at 60fps even with heavy image content
- Testable architecture — unit tests for business logic without Android framework
- Scalable to millions of posts without client-side performance degradation
Does this match what you had in mind, or should I adjust the scope?
That's a solid list. Let's go with this. How would you approach the high-level architecture?
I'd go with MVVM (Model-View-ViewModel) with a Repository pattern. Here's the data flow — from top to bottom:
The key idea: Room is the single source of truth. The UI never talks to the network directly. It observes Room via Kotlin Flows. When we need fresh data, we hit the API and write results into Room — the Flow automatically emits the update.
This gives us offline support for free. If the network fails, the user still sees cached data.
What the feed looks like on device
- MVP requires manual cleanup — the Presenter is tied to the View via an interface, and you must manually null it out to avoid leaks.
- MVVM eliminates this pattern — the ViewModel exposes a `StateFlow` and has zero reference to the View.
- View lifecycle is automatic — the View subscribes when alive, stops when destroyed, with no leak, no null checks, and no `view?.updateUI()` crashes on detached Activities.
- Platform-agnostic vs Android-only: Flow is part of `kotlinx.coroutines` — it works on Android, server-side Kotlin, desktop, and Kotlin Multiplatform. LiveData is tied to the Android framework. Your domain and data layers shouldn't need Android imports.
- Rich operators: Flow gives you `debounce`, `flatMapLatest`, `combine`, `distinctUntilChanged`, `retry`, `zip`, and more out of the box. With LiveData, any non-trivial transformation turns into MediatorLiveData spaghetti.
- Coroutine-native: Flow integrates directly with structured concurrency — cancellation, exception handling, and dispatchers just work. LiveData's coroutine support feels bolted on via the `liveData { }` builder.
- Cold by default: Flow doesn't produce values until collected, saving resources. LiveData is always active while observed, even if the consumer isn't ready.
- Jetpack has moved on: Room returns `Flow<T>` natively, DataStore uses Flow, and Paging 3 emits `Flow<PagingData>`. With `collectAsStateWithLifecycle()` in Compose, we get lifecycle safety without LiveData.
- Offset pagination breaks with dynamic data — if a new post is inserted at the top between requests, page 3 will now have a duplicate from page 2.
- Cursor pagination is stable — it uses an opaque token so the server resumes from a fixed position regardless of insertions or deletions.
- For live feeds, cursor is essential — with constant new content, cursor-based pagination is the only approach that works reliably.
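To make the failure mode concrete, here is a tiny pure-Kotlin simulation (post IDs and page size are made up): offset pagination re-serves an item after a new post lands at the top, while a cursor resumes cleanly from the last item seen.

```kotlin
// Hypothetical in-memory feed, newest first; IDs stand in for real records.
fun offsetPage(feed: List<String>, offset: Int, limit: Int): List<String> =
    feed.drop(offset).take(limit)

fun cursorPage(feed: List<String>, after: String?, limit: Int): List<String> {
    // Resume strictly after the last item the client saw.
    val start = if (after == null) 0 else feed.indexOf(after) + 1
    return feed.drop(start).take(limit)
}

fun main() {
    var feed = listOf("p9", "p8", "p7", "p6", "p5")

    // Client fetches page 1; both schemes agree so far.
    val page1 = offsetPage(feed, offset = 0, limit = 2) // [p9, p8]

    // A new post lands at the top between requests.
    feed = listOf("p10") + feed

    // Offset page 2 re-serves p8 — a duplicate.
    println(offsetPage(feed, offset = 2, limit = 2))          // [p8, p7]
    // Cursor page 2 resumes after p8 — no duplicate.
    println(cursorPage(feed, after = page1.last(), limit = 2)) // [p7, p6]
}
```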
- Room is the single source of truth (SSOT) — the UI observes Room, never the API directly.
- Network fetches update Room — the API writes to Room, and Room's Flow automatically emits to the UI.
- Offline support is free — Room persists data locally, so the user sees cached content when the network is gone.
- No stale-data bugs — with one source instead of two, there's no conflicting state between the UI and the API.
- Multi-screen consistency — both screens read from the same table, eliminating sync issues.
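A minimal sketch of the SSOT idea in plain Kotlin — a toy in-memory store standing in for Room (class and method names are illustrative): every screen observes the store, and the network layer only ever writes into it.

```kotlin
// Toy single-source-of-truth store: id -> (createdAt, content).
class PostStore {
    private val posts = mutableMapOf<String, Pair<Long, String>>()
    private val observers = mutableListOf<(List<String>) -> Unit>()

    fun observe(onChange: (List<String>) -> Unit) {
        observers += onChange
        onChange(snapshot()) // emit current state on subscribe
    }

    fun insertAll(fresh: Map<String, Pair<Long, String>>) {
        posts.putAll(fresh)
        val s = snapshot()
        observers.forEach { it(s) } // one write fans out to every screen
    }

    private fun snapshot(): List<String> =
        posts.values.sortedByDescending { it.first }.map { it.second }
}

fun main() {
    val store = PostStore()
    var feedScreen = listOf<String>()
    var profileScreen = listOf<String>()
    store.observe { feedScreen = it }
    store.observe { profileScreen = it }

    // The "network response" is written to the store, never handed to UI directly.
    store.insertAll(mapOf("b" to (2L to "second post"), "a" to (1L to "first post")))

    println(feedScreen)                  // [second post, first post]
    println(feedScreen == profileScreen) // true — both screens converged
}
```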
- XML is imperative and error-prone — findViewById/ViewBinding plus manual updates make it easy to forget to update a view and introduce bugs.
- Compose is declarative — describe the UI for a given state, and the framework diffs and updates automatically.
- Eliminates boilerplate — `LazyColumn` replaces RecyclerView + Adapter + ViewHolder + DiffUtil in one composable.
- Lifecycle handling is trivial — `collectAsStateWithLifecycle` handles lifecycle-aware collection in one line.
- Huge productivity gain — for a greenfield screen, Compose eliminates roughly half the boilerplate code.
Good. Let's see some code. Walk me through the data layer first.
Starting with the Room entity and DAO, then Retrofit, then the Repository that ties them together.
@Entity(tableName = "posts")
data class PostEntity(
@PrimaryKey val id: String,
val authorName: String,
val authorAvatar: String,
val content: String,
val imageUrl: String?,
val likeCount: Int,
val commentCount: Int,
val shareCount: Int,
val createdAt: Long,
val cursor: String,
val trendingScore: Double = 0.0
)
@Dao
interface PostDao {
@Query("SELECT * FROM posts ORDER BY createdAt DESC")
fun observeAll(): Flow<List<PostEntity>>
@Insert(onConflict = OnConflictStrategy.REPLACE)
suspend fun insertAll(posts: List<PostEntity>)
@Query("SELECT cursor FROM posts ORDER BY createdAt ASC LIMIT 1")
suspend fun oldestCursor(): String?
@Query("DELETE FROM posts")
suspend fun clearAll()
}
- observeAll() returns Flow — it does not execute a query immediately; it sets up a reactive stream. The actual query runs only when someone calls `.collect()`. Since no work happens at call time, there's nothing to suspend.
- insertAll(), oldestCursor(), clearAll() do immediate I/O — they hit the SQLite database right now, which is a blocking disk operation. Marking them `suspend` means Room runs them on a background dispatcher automatically.
- Rule of thumb — if a Room DAO method returns `Flow<T>` or `LiveData<T>`, it's a regular `fun`. If it returns a plain value or `Unit`, it must be a `suspend fun` (otherwise calling it on the main thread throws an IllegalStateException at runtime, unless main-thread queries are explicitly allowed).
What does the API contract look like? Walk me through the request and response for cursor-based pagination.
The API uses cursor-based pagination. The client sends an opaque cursor string (the ID or timestamp of the last item it saw) and a page size. The server returns the next page of items plus the cursor for the following page.
Here's the contract:
// ---- REQUEST ----
// GET /v1/feed?after={cursor}&limit={pageSize}
//
// after → opaque cursor string (null for first page)
// limit → number of items per page (default 20)
//
// ---- RESPONSE ----
// {
//   "posts": [ { "id": "abc", "author": "...", ... }, ... ],
//   "nextCursor": "eyJ0IjoxNjg5MjM...",  // opaque, base64
//   "hasMore": true
// }
interface FeedApiService {
@GET("v1/feed")
suspend fun getFeed(
@Query("after") cursor: String? = null,
@Query("limit") limit: Int = 20
): FeedResponse
@POST("v1/posts/{postId}/like")
suspend fun likePost(@Path("postId") postId: String)
@DELETE("v1/posts/{postId}/like")
suspend fun unlikePost(@Path("postId") postId: String)
@POST("v1/posts")
suspend fun createPost(@Body request: CreatePostRequest): PostDto
}
data class FeedResponse(
val posts: List<PostDto>,
val nextCursor: String?,
val hasMore: Boolean
)
- Server flexibility — the server can change the cursor encoding anytime (switch from timestamp to composite key) without breaking clients.
- No skipped/duplicate items — offset pagination breaks when items are inserted or deleted between pages. Cursor always points to the exact last item seen.
- Prevents abuse — clients can't guess or manipulate the cursor to skip pages or access arbitrary data.
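For illustration only — one plausible way a server might build such an opaque cursor (the encoding shown is an assumption; the client never parses it, it just stores and echoes it back):

```kotlin
import java.util.Base64

// Illustrative server-side cursor: base64(url-safe) of "createdAt:id".
fun encodeCursor(createdAt: Long, id: String): String =
    Base64.getUrlEncoder().withoutPadding()
        .encodeToString("$createdAt:$id".toByteArray())

// Only the server decodes it; it can change this scheme without breaking clients.
fun decodeCursor(cursor: String): Pair<Long, String> {
    val (ts, id) = String(Base64.getUrlDecoder().decode(cursor)).split(":", limit = 2)
    return ts.toLong() to id
}

fun main() {
    val cursor = encodeCursor(1689230000L, "post_abc")
    println(cursor)               // opaque token the client resends as ?after=
    println(decodeCursor(cursor)) // (1689230000, post_abc) — server side only
}
```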
- Retrofit + coroutines — when a Retrofit interface method is marked `suspend`, Retrofit executes the call off the main thread and suspends the calling coroutine until the response arrives.
- No callback hell — without `suspend`, you'd need `Call<T>` with `enqueue()` callbacks or an RxJava `Observable`.
- Structured concurrency — if the ViewModel's scope is cancelled (user navigates away), in-flight API calls are cancelled automatically.
Now the Repository — the bridge that makes Room the source of truth:
class FeedRepositoryImpl @Inject constructor(
private val api: FeedApiService,
private val dao: PostDao
) : FeedRepository {
override fun observeFeed(): Flow<List<Post>> =
    dao.observeAll() // Room emits entities; map them to domain models
        .map { entities -> entities.map { it.toDomain() } }
override suspend fun loadNextPage(): Boolean {
    val cursor = dao.oldestCursor()
    val response = api.getFeed(cursor)
    // DTOs must be mapped to Room entities before insertion
    // (toDomain/toEntity are simple mappers, omitted here)
    dao.insertAll(response.posts.map { it.toEntity(response.nextCursor.orEmpty()) })
    return response.hasMore
}
override suspend fun refresh() {
dao.clearAll()
loadNextPage()
}
}
How does the ViewModel expose state to the UI? And how does rotation work?
One data class represents the entire screen state. One StateFlow emits it. The ViewModel survives rotation because its ViewModelStore is retained across configuration changes — the recreated Activity receives the same store, so the same ViewModel instance is handed back.
sealed interface FeedUiState {
data object Loading : FeedUiState
data class Success(
val posts: List<Post>,
val isLoadingMore: Boolean = false,
val hasMore: Boolean = true
) : FeedUiState
data class Error(val message: String) : FeedUiState
}
@HiltViewModel
class FeedViewModel @Inject constructor(
private val repository: FeedRepository
) : ViewModel() {
private val _uiState = MutableStateFlow<FeedUiState>(FeedUiState.Loading)
val uiState = _uiState.asStateFlow()
init {
    viewModelScope.launch {
        repository.observeFeed().collect { posts ->
            // Preserve pagination flags if we were already in Success,
            // so a Room emission doesn't reset isLoadingMore/hasMore
            val prev = _uiState.value as? FeedUiState.Success
            _uiState.value = FeedUiState.Success(
                posts = posts,
                isLoadingMore = prev?.isLoadingMore ?: false,
                hasMore = prev?.hasMore ?: true
            )
        }
    }
    refresh()
}
fun refresh() = viewModelScope.launch {
try {
repository.refresh()
} catch (e: Exception) {
_uiState.value = FeedUiState.Error(e.message ?: "Something went wrong")
}
}
fun onScrolledNearEnd() {
val current = _uiState.value as? FeedUiState.Success ?: return
if (current.isLoadingMore || !current.hasMore) return
viewModelScope.launch {
_uiState.value = current.copy(isLoadingMore = true)
val more = try { repository.loadNextPage() } catch (_: Exception) { false }
val latest = _uiState.value as? FeedUiState.Success ?: return@launch
_uiState.value = latest.copy(isLoadingMore = false, hasMore = more)
}
}
}
On rotation: ViewModel stays alive. StateFlow keeps the latest emission cached. When Compose resubscribes after recreation, it gets the current state immediately. Zero data loss, zero re-fetching.
- Cold flow = no work until collected — Room's Flow is cold, so no query runs when nobody is listening.
- Hot flow = always has a value — StateFlow is hot and always holds the latest value.
- StateFlow survives rotation — new collectors get the latest state instantly without re-fetching.
- Design pattern: cold at data layer, hot at presentation — this gives us efficiency from Room and responsiveness from StateFlow.
- ViewModel bridges them — the `init` block collects the cold Room flow once and pushes values into the hot StateFlow.
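A pure-Kotlin analogy for the cold half of that design, using `Sequence` as a stand-in for a cold Flow (no kotlinx.coroutines is needed for the illustration):

```kotlin
fun main() {
    var queriesRun = 0

    // Cold: building the sequence does no work — the "query" in the block
    // runs only when a consumer actually iterates.
    val coldFeed = sequence {
        queriesRun++
        yield(listOf("post1", "post2"))
    }
    println(queriesRun)   // 0 — nothing collected yet

    val page = coldFeed.first()
    println(queriesRun)   // 1 — work happened on collection
    println(page)         // [post1, post2]

    // Hot (StateFlow-like): cache the latest emission so a late subscriber
    // reads it instantly without re-running the query.
    val latestState = page
    println(latestState)  // [post1, post2] — instant, no re-fetch
}
```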
Look at these two lines:
private val _uiState = MutableStateFlow<FeedUiState>(FeedUiState.Loading)
val uiState: StateFlow<FeedUiState> = _uiState.asStateFlow()
- Why private? — `MutableStateFlow` has a `.value` setter. If you expose it publicly, any class (Fragment, another ViewModel, utility) can do `viewModel._uiState.value = whatever`, breaking unidirectional data flow. The ViewModel loses exclusive ownership of state, anyone can mutate it from anywhere, and you lose all guarantees about when and why state changes.
- .asStateFlow() wraps it in a read-only interface — it returns a `StateFlow` with a `.value` getter but no setter, preventing mutation even if a consumer tries to cast it.
- asStateFlow() returns a new wrapper, not the original — consumers can only read; a downcast back to `MutableStateFlow` fails because it's a different object.
- Skipping asStateFlow() is unsafe — if you just write `val uiState: StateFlow<FeedUiState> = _uiState`, a determined consumer can downcast it: `(viewModel.uiState as MutableStateFlow).value = hacked`.
- Kotlin convention: backing property — the underscore prefix `_uiState` marks the private mutable version. The public `uiState` (no underscore) is the read-only view. This pattern appears in virtually every production ViewModel.
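The same encapsulation trap exists with plain collections, which makes for a quick self-contained analogy (this is an analogy, not the Flow API — the class names are made up):

```kotlin
// Exposing the private mutable object under a read-only TYPE does not
// protect it — the runtime object is still mutable.
class LeakyHolder {
    private val _items = mutableListOf("a", "b")
    val items: List<String> = _items // same object, just retyped
}

// Handing out a separate read-only view protects the internal state,
// the way asStateFlow()'s wrapper protects the MutableStateFlow.
class SafeHolder {
    private val _items = mutableListOf("a", "b")
    val items: List<String> get() = _items.toList() // defensive copy
}

fun main() {
    val leaky = LeakyHolder()
    @Suppress("UNCHECKED_CAST")
    (leaky.items as MutableList<String>).add("hacked") // cast succeeds!
    println(leaky.items) // [a, b, hacked] — encapsulation broken

    val safe = SafeHolder()
    (safe.items as MutableList<String>).add("hacked")  // mutates only a copy
    println(safe.items)  // [a, b] — internal state intact
}
```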
- Injecting the concrete class violates Dependency Inversion (the D in SOLID) — high-level modules should depend on abstractions, not low-level details.
- With the interface, tests become easy — pass in a `FakeFeedRepository` with no Room, no Retrofit, running on the JVM in milliseconds.
- Swap implementations without touching the ViewModel — replace Retrofit with Ktor or any other client without changing the ViewModel.
- Decoupled compile dependencies — the ViewModel module doesn't even need Room/Retrofit as compile dependencies; only the implementation does.
@Module
@InstallIn(SingletonComponent::class)
abstract class RepositoryModule {
@Binds
abstract fun bindFeedRepository(
impl: FeedRepositoryImpl
): FeedRepository
}
I notice the ViewModel is calling repository methods directly. Would you keep it this way?
For a small feature, calling the repository directly is fine. But this is a feed app that will grow — we will add post creation, liking, sharing, reporting, bookmarking, search with analytics tracking, and more. If all that logic sits in the ViewModel, it becomes a god class.
I would introduce UseCases (also called Interactors). Each UseCase encapsulates one piece of business logic. The ViewModel becomes a thin orchestrator that just wires UseCases to UI state.
Benefits:
- Single Responsibility: each UseCase does exactly one thing. `ObserveFeedUseCase` observes the feed, `RefreshFeedUseCase` refreshes it, `LoadNextPageUseCase` paginates.
- Reusability: if a notification screen also needs to refresh the feed, it can reuse `RefreshFeedUseCase` without duplicating the logic.
- Testability: you test each UseCase in isolation. The ViewModel test just verifies it calls the right UseCase at the right time.
- Readability: the ViewModel constructor tells you everything the screen does at a glance — just read the UseCase names.
class ObserveFeedUseCase @Inject constructor(
private val repository: FeedRepository
) {
operator fun invoke(): Flow<List<Post>> =
repository.observeFeed()
}
class RefreshFeedUseCase @Inject constructor(
private val repository: FeedRepository
) {
suspend operator fun invoke() = repository.refresh()
}
class LoadNextPageUseCase @Inject constructor(
private val repository: FeedRepository
) {
suspend operator fun invoke(): Boolean = repository.loadNextPage()
}
Now the ViewModel becomes much cleaner — it just wires UseCases to state. (Note: this version models `FeedUiState` as a single flat data class with `posts`, `isRefreshing`, `isLoadingMore`, `hasMore`, and `error` fields, rather than the earlier sealed interface.)
@HiltViewModel
class FeedViewModel @Inject constructor(
private val observeFeed: ObserveFeedUseCase,
private val refreshFeed: RefreshFeedUseCase,
private val loadNextPage: LoadNextPageUseCase
) : ViewModel() {
private val _uiState = MutableStateFlow(FeedUiState())
val uiState = _uiState.asStateFlow()
init {
viewModelScope.launch {
observeFeed().collect { posts ->
setState { copy(posts = posts) }
}
}
refresh()
}
fun refresh() = viewModelScope.launch {
    setState { copy(isRefreshing = true) }
    try {
        refreshFeed()
    } catch (e: Exception) {
        setState { copy(error = e.message ?: "Something went wrong") }
    } finally {
        setState { copy(isRefreshing = false) } // always clear the spinner
    }
}
fun onScrolledNearEnd() {
if (_uiState.value.isLoadingMore || !_uiState.value.hasMore) return
viewModelScope.launch {
setState { copy(isLoadingMore = true) }
val more = try { loadNextPage() } catch (_: Exception) { false }
setState { copy(isLoadingMore = false, hasMore = more) }
}
}
private fun setState(reduce: FeedUiState.() -> FeedUiState) = _uiState.update(reduce)
}
Notice: The constructor now reads like a feature list — observeFeed, refreshFeed, loadNextPage. Any engineer can open this file and immediately understand what the screen does without reading a single method body.
- Today it's simple, but tomorrow it will grow — the UseCase may merge the feed with promoted posts, filter blocked users, inject A/B sort logic, or add impression analytics.
- Cost of having it is minimal — one small class is negligible.
- Cost of NOT having it is refactoring pain — if you skip it and three features later share the same feed observation logic, you'll need to refactor the ViewModel.
Show me the Compose screen that ties this together.
@Composable
fun FeedScreen(viewModel: FeedViewModel = hiltViewModel()) {
val state by viewModel.uiState.collectAsStateWithLifecycle()
when (val uiState = state) {
is FeedUiState.Loading -> FullScreenLoader()
is FeedUiState.Error -> ErrorScreen(
message = uiState.message,
onRetry = viewModel::refresh
)
is FeedUiState.Success -> {
val listState = rememberLazyListState()
// Trigger pagination when near the end
LaunchedEffect(listState) {
snapshotFlow {
val last = listState.layoutInfo.visibleItemsInfo
.lastOrNull()?.index ?: 0
val total = listState.layoutInfo.totalItemsCount
last >= total - 3
}
.distinctUntilChanged()
.filter { it }
.collect { viewModel.onScrolledNearEnd() }
}
LazyColumn(state = listState) {
items(uiState.posts, key = { it.id }) { post ->
PostCard(post = post)
}
if (uiState.isLoadingMore) {
item { LoadingIndicator() }
}
}
}
}
}
Good. Now add search with debounce and filtering. Also, how would you find the top 5 trending posts efficiently? Think about the data structure.
For search, I'd debounce the input by 300ms using Flow.debounce() so we're not hammering the database on every keystroke. Combined with flatMapLatest, any in-flight search gets cancelled when a new query arrives.
For the "Top 5 trending" — this is a classic Top-K problem. Sorting the entire list would be O(N log N), but we can do it in O(N log K) using a min-heap (PriorityQueue) of size K. As we scan each post, if its score beats the heap minimum, we evict the min and insert the new one.
private val _searchQuery = MutableStateFlow("")
private val _activeFilter = MutableStateFlow(FeedFilter.All)
val filteredFeed: Flow<List<Post>> = combine(
repository.observeFeed(),
_searchQuery.debounce(300).distinctUntilChanged(),
_activeFilter
) { posts, query, filter ->
posts
.filter { it.content.contains(query, ignoreCase = true) }
.filter { post -> when (filter) {
FeedFilter.All -> true
FeedFilter.Photos -> post.imageUrl != null
FeedFilter.TextOnly -> post.imageUrl == null
FeedFilter.Viral -> post.likeCount > 1000
} }
}
fun topKTrending(posts: List<Post>, k: Int): List<Post> {
// Min-heap: smallest score at the top
val minHeap = PriorityQueue<Post>(k) { a, b ->
a.trendingScore.compareTo(b.trendingScore)
}
for (post in posts) {
if (minHeap.size < k) {
minHeap.add(post)
} else if (post.trendingScore > minHeap.peek().trendingScore) {
minHeap.poll()
minHeap.add(post)
}
}
return minHeap.sortedByDescending { it.trendingScore }
}
// O(N log K) time, O(K) space
// For small K (here K=5) this avoids sorting all N posts;
// the advantage over a full O(N log N) sort grows with N
Can you show me exactly how debounce and flatMapLatest work with Flows? What does flatMapLatest actually do?
debounce(300) waits for 300ms of silence before emitting. User types "a", "ab", "abc" rapidly — only "abc" gets emitted because the previous values were superseded within 300ms.
flatMapLatest cancels the previous inner flow when a new value arrives. If a search for "ab" is still hitting the database when "abc" comes in, it cancels the "ab" query and starts "abc" instead. No wasted work, no stale results leaking through.
Together: debounce reduces the number of emissions, flatMapLatest ensures only the latest emission's work actually completes.
// User types: "a" (0ms) -> "ab" (100ms) -> "abc" (250ms) -> stops
_searchQuery // emits: "a", "ab", "abc"
.debounce(300) // emits: "abc" only (300ms after last keystroke)
.distinctUntilChanged() // skips if same as last (e.g., user types then deletes)
.flatMapLatest { query -> // cancels any in-flight search, starts new one
if (query.isBlank()) {
flowOf(emptyList())
} else {
repository.search(query) // returns Flow<List<Post>> from Room LIKE query
}
}
.collect { results ->
_uiState.update { it.copy(posts = results) }
}
Tip for viewers: understanding debounce + flatMapLatest is essential for any Android interview at a top MNC. This exact pattern appears in search bars, autocomplete, and form validation across production apps.
What happens when the user creates a new post but the network fails? How do you handle that?
This is an optimistic update + retry queue pattern.
Step 1: Write to Room immediately with a syncStatus = PENDING flag. The post shows up in the feed instantly — the user sees it right away. We can dim it slightly or show a small "sending..." indicator.
Step 2: Attempt to sync to the server in the background via a coroutine. If it succeeds, update syncStatus = SYNCED and replace the local ID with the server-generated ID.
Step 3: If it fails, mark it as syncStatus = FAILED. Show a retry affordance on the post card. Also enqueue a WorkManager one-time work request with exponential backoff — it will retry even if the user kills the app.
Step 4: On next app launch, check for any PENDING or FAILED posts and attempt to sync them.
class CreatePostUseCase @Inject constructor(
private val dao: PostDao,
private val api: FeedApiService,
private val workManager: WorkManager
) {
suspend operator fun invoke(draft: PostDraft) {
// 1. Optimistic insert into Room
val localPost = draft.toEntity(
id = UUID.randomUUID().toString(),
syncStatus = SyncStatus.PENDING
)
dao.insert(localPost)
// 2. Try to sync immediately
try {
val serverPost = api.createPost(draft.toRequest())
dao.updateSyncStatus(localPost.id, SyncStatus.SYNCED)
dao.updateServerId(localPost.id, serverPost.id)
} catch (e: Exception) {
// 3. Mark failed, enqueue WorkManager retry
dao.updateSyncStatus(localPost.id, SyncStatus.FAILED)
workManager.enqueueUniqueWork(
"sync_post_${localPost.id}",
ExistingWorkPolicy.KEEP,
SyncPostWorker.buildRequest(localPost.id)
)
}
}
}
- Upload image first to a separate media endpoint — use S3 pre-signed URLs or a dedicated media endpoint to get back a media ID or URL.
- Create the post with the media reference — link the uploaded media to the post creation.
- Decouple upload failures — if the image upload fails, the post creation never fires, avoiding orphaned posts.
- User feedback is clear — show a progress indicator and allow independent retry of the image upload.
- For large images, use WorkManager — with chunked upload support and progress tracking via `setProgress()`.
- For a feed app, the server is the authority — the server is the source of truth for conflict resolution.
- Use last-write-wins with server-side timestamp — the server compares timestamps and keeps the most recent version.
- Client always defers to server — after sync, the client accepts whatever the server returns as the authoritative state.
- Simple strategy is sufficient for feeds — more sophisticated conflict resolution like CRDTs or operational transforms would be overkill for a social feed.
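The last-write-wins rule above can be sketched in a few lines (field names are illustrative; the real comparison happens server-side, with the client simply accepting the result):

```kotlin
// Two versions of the same post; the newer server timestamp survives.
data class PostVersion(val id: String, val content: String, val updatedAt: Long)

fun resolve(local: PostVersion, server: PostVersion): PostVersion =
    // Ties go to the server — it is the authority for a feed app.
    if (server.updatedAt >= local.updatedAt) server else local

fun main() {
    val local  = PostVersion("p1", "edited offline", updatedAt = 100)
    val server = PostVersion("p1", "edited on web",  updatedAt = 150)
    println(resolve(local, server).content) // edited on web
}
```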
- Unit tests: test the ViewModel with a fake `FeedRepository` — no Android framework needed, just plain Kotlin + coroutines tests. Verify that `refresh()` sets `isRefreshing = true` then `false`, and verify error handling.
- Repository tests: use an in-memory Room database and a mock Retrofit service (or MockWebServer). Test that `loadNextPage()` writes to Room, that `observeFeed()` emits after insert, and that the cursor logic works.
- UI tests: use Compose testing with `createComposeRule`. Inject a fake ViewModel state and verify the LazyColumn renders the correct number of items, shows loading indicators, etc.
- ViewModel survives configuration changes but not process death — screen rotation keeps the ViewModel alive, but process death kills it.
- Use SavedStateHandle for process death — add it to the ViewModel constructor to persist small amounts of UI state across process death.
- Persist UI metadata, not data — save the current search query, active filter, and scroll position via SavedStateHandle.
- Post data is already persisted in Room — when the ViewModel re-initializes after process death, it re-observes Room and gets the cached feed back immediately.
- No need to restore the feed itself — only restore UI metadata, and the feed will be repopulated from the database automatically.
Let's add a Like button to each post. When the user taps it, the like count increments and the heart fills in. Walk me through how you'd implement this end to end.
This is another optimistic update problem — same pattern as post creation, but simpler. The user expects instant feedback when they tap the heart. We cannot wait for a network round-trip. Here is the approach:
Step 1: Update Room immediately. Toggle the isLikedByMe flag and increment/decrement likeCount in the local database. Since the UI observes Room via Flow, the heart fills in and the count updates instantly.
Step 2: Fire-and-forget API call. Send the like/unlike request to the server in the background. No loading spinner, no blocking.
Step 3: Handle failure. If the API call fails, revert the local state — toggle isLikedByMe back and adjust the count. Show a subtle toast or snackbar: "Couldn't like this post. Try again."
Step 4: Deduplication. If the user taps like/unlike rapidly, we debounce the API calls. Only the final state gets sent to the server. This avoids flooding the backend with toggle requests.
First, I need to add isLikedByMe to the entity:
@Entity(tableName = "posts")
data class PostEntity(
@PrimaryKey val id: String,
val authorName: String,
val authorAvatar: String,
val content: String,
val imageUrl: String?,
val likeCount: Int,
val commentCount: Int,
val shareCount: Int,
val isLikedByMe: Boolean = false, // NEW
val createdAt: Long,
val cursor: String,
val trendingScore: Double = 0.0
)
Now the UseCase. This is where the optimistic update + rollback logic lives:
class ToggleLikeUseCase @Inject constructor(
private val dao: PostDao,
private val api: FeedApiService
) {
suspend operator fun invoke(postId: String) {
// 1. Read current state from Room
val post = dao.getById(postId) ?: return
val nowLiked = !post.isLikedByMe
val newCount = if (nowLiked) post.likeCount + 1
else post.likeCount - 1
// 2. Optimistic update — UI reacts instantly
dao.updateLike(postId, nowLiked, newCount)
// 3. Sync to server
try {
if (nowLiked) {
api.likePost(postId)
} else {
api.unlikePost(postId)
}
} catch (e: Exception) {
// 4. Rollback on failure
dao.updateLike(postId, post.isLikedByMe, post.likeCount)
throw e // let ViewModel handle the error UI
}
}
}
@Query("SELECT * FROM posts WHERE id = :postId")
suspend fun getById(postId: String): PostEntity?
@Query("""
UPDATE posts
SET isLikedByMe = :liked, likeCount = :count
WHERE id = :postId
""")
suspend fun updateLike(
postId: String,
liked: Boolean,
count: Int
)
In the ViewModel, wiring the like action is one function:
// Add to constructor:
private val toggleLike: ToggleLikeUseCase
fun onLikeClicked(postId: String) {
viewModelScope.launch {
try {
toggleLike(postId)
} catch (e: Exception) {
_uiState.update { it.copy(
error = "Couldn't like this post. Try again."
) }
}
}
}
// No need to manually update UI — Room Flow emits
// the updated post automatically after dao.updateLike()
And in Compose, the PostCard gets a callback:
@Composable
fun PostCard(
post: Post,
onLikeClick: (postId: String) -> Unit
) {
// ... other post content ...
Row(
verticalAlignment = Alignment.CenterVertically,
modifier = Modifier
.clickable { onLikeClick(post.id) }
.padding(8.dp)
) {
Icon(
imageVector = if (post.isLikedByMe)
Icons.Filled.Favorite
else
Icons.Outlined.FavoriteBorder,
contentDescription = "Like",
tint = if (post.isLikedByMe)
Color.Red
else
Color.Gray
)
Spacer(Modifier.width(4.dp))
Text(
text = "${post.likeCount}",
style = MaterialTheme.typography.labelMedium
)
}
}
What the like interaction looks like on device
- Debounce the API call, not the UI update — the local Room update happens on every tap so the UI is always responsive.
- Network request is debounced — only the final state is sent after 500ms of no taps.
- Implementation uses job cancellation — maintain a `Map<String, Job>` in the UseCase or a dedicated LikeSync manager.
- Each new toggle cancels pending work — when a new toggle arrives, cancel the pending job for that post ID and start a new delayed coroutine.
- Check final state before syncing — when the delay completes, query the current state in Room and sync only that to the server.
- Result is optimal — user taps 10 times with instant UI feedback, but only 1 API call is made with the final like/unlike state.
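Here is the dedup idea reduced to its core, minus the coroutine timer (names are illustrative; in production a 500ms `delay()` job per post ID would drive the flush, with each new tap cancelling and restarting that job):

```kotlin
// Records only the LATEST desired state per post ID; flush() then performs
// at most one "API call" per post regardless of how many taps occurred.
class LikeSyncQueue(private val sendToServer: (postId: String, liked: Boolean) -> Unit) {
    private val pending = linkedMapOf<String, Boolean>() // postId -> final liked state

    fun onToggled(postId: String, liked: Boolean) {
        pending[postId] = liked // later taps overwrite earlier ones
    }

    fun flush() {
        pending.forEach { (id, liked) -> sendToServer(id, liked) }
        pending.clear()
    }
}

fun main() {
    val calls = mutableListOf<String>()
    val queue = LikeSyncQueue { id, liked -> calls += "$id:$liked" }

    // User taps the heart 5 times in a burst — the final state is "liked".
    repeat(5) { n -> queue.onToggled("post_1", liked = n % 2 == 0) }
    queue.flush()

    println(calls) // [post_1:true] — one call with the final state
}
```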
- Google's current recommendation is StateFlow — all UI state, including transient errors, lives in one `FeedUiState` data class.
- Clear semantics with StateFlow — the Compose screen checks `state.error`, shows a Snackbar, then calls `viewModel.clearError()`.
- SharedFlow/Channel can lose events during rotation — if an event is emitted while the UI is not collecting (e.g., mid-rotation), it gets lost.
- Buffered channels introduce subtle bugs — a `Channel` with `BUFFERED` capacity can help, but it introduces a different class of bugs around event ordering and consumption.
- StateFlow is simpler and safer — for most cases, putting errors in StateFlow and clearing them after display is the right approach.
- Reserve SharedFlow for navigation — only use SharedFlow/Channel for truly ephemeral events like navigation commands.
- Optimistic count is always an approximation — it reflects the local user's action, not the server truth.
- Next refresh brings authoritative data — on pull-to-refresh or background poll, the server returns the true count and `isLikedByMe` flag.
- Real-time accuracy requires WebSockets — you could add WebSocket or Server-Sent Events for live count updates.
- Slight staleness is acceptable for feeds — for most social feeds, a few seconds of stale counts are unnoticeable to users.
- Facebook itself does this — Facebook's own app shows stale counts until the next scroll or refresh, and users never notice.
You've mentioned optimistic updates twice now — for post creation and likes. Can you talk about the broader consistency model here?
This entire architecture is built on eventual consistency, not strong consistency. Here is what that means practically:
At any given moment, the local Room database and the remote server may disagree. The user might see 235 likes locally but the server has 238 because three other users liked the post in the last 10 seconds. That is fine. When the feed refreshes, the server sends the authoritative state and Room converges.
The contract is: the client is always eventually correct, but never guaranteed to be immediately correct.
Where we use eventual consistency in this app:
- Like counts: Optimistic local increment, server corrects on next sync. Off by a few is acceptable — users do not notice if they see "235 likes" vs "238 likes" for 30 seconds.
- Post creation: the post appears locally with `PENDING` status. The server may take a few seconds to process it (image upload, content moderation, etc.). Once synced, the local record is updated with the server ID and `SYNCED` status.
- Feed ordering: the local feed might be stale by a few minutes. Pull-to-refresh or background polling brings it up to date. We do not need WebSockets for a feed — the cost is not worth the marginal freshness gain.
- Deleted posts: If another user deletes a post, we might still show it until the next refresh. The server returns a 404 or omits it from the response, and our local copy gets cleaned up.
Why not strong consistency? Because it means blocking the UI on every network call. The user taps "like" and waits 200ms-2000ms for a spinner before the heart fills. That feels broken. Users care about responsiveness more than perfect accuracy on social metrics.
The trade-off: We accept momentary staleness in exchange for instant UI feedback. The worst case is the user sees a count that is off by a small amount for a short time. The best case — which is most of the time — is that the optimistic update was correct and the server confirms it.
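The optimistic-update flow described above can be sketched as a small state holder (the names here are illustrative, not the app's actual code): bump the local count immediately, roll back on failure, and let the server value win at the next sync.

```java
// Sketch of the optimistic-update pattern (assumed names, not real app code).
// The local count is bumped before the network call returns; a failed call
// rolls back; the server's authoritative count wins on the next sync.
import java.util.concurrent.atomic.AtomicInteger;

class OptimisticLike {
    private final AtomicInteger localLikeCount;
    private boolean likedLocally = false;

    OptimisticLike(int serverCount) {
        this.localLikeCount = new AtomicInteger(serverCount);
    }

    /** Apply the like locally before the network call returns. */
    void likeOptimistically() {
        likedLocally = true;
        localLikeCount.incrementAndGet(); // instant UI feedback
    }

    /** Called when the network call fails: undo the local change. */
    void rollback() {
        if (likedLocally) {
            likedLocally = false;
            localLikeCount.decrementAndGet();
        }
    }

    /** Called on the next sync: server state is authoritative. */
    void reconcile(int serverCount) {
        localLikeCount.set(serverCount);
    }

    int count() { return localLikeCount.get(); }
}
```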
When would you NOT use eventual consistency?
- Financial transactions are critical — if the user transfers money, you cannot optimistically show "Transfer complete" and then roll it back.
- Banking apps use strong consistency — they block the UI, show a spinner, wait for server confirmation, and only then update the UI.
- E-commerce checkout requires strong consistency — you do not optimistically decrement inventory and show "Order placed" without server confirmation.
- Real-world consequences demand precision — if getting it wrong moves money, places an order, or updates a medical record, use strong consistency.
- Eventual consistency is fine for metrics — if getting it wrong just means a stale number on screen for a few seconds (likes, comments, read receipts), eventual consistency is acceptable.
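For contrast with the optimistic pattern, here is a minimal sketch of the blocking, confirm-first flow those strong-consistency cases require (class and method names are illustrative): local state changes only after the server confirms.

```java
// Sketch of a strong-consistency flow (illustrative names): block the UI,
// wait for the server, and only mark success on confirmation — success is
// never shown speculatively.
import java.util.function.Supplier;

class ConfirmedTransfer {
    private String status = "IDLE";

    /** Runs the server call first; local state changes only on its result. */
    String transfer(Supplier<Boolean> serverCall) {
        status = "PENDING";            // spinner shown, UI blocked
        boolean ok = serverCall.get(); // wait for the server to confirm
        status = ok ? "COMPLETE" : "FAILED";
        return status;
    }

    String status() { return status; }
}
```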
Let's do a quick rapid fire. Short answers, show you can think on your feet.
Q1: How do remote API data and local Room data get married together? Have you used RemoteMediator?
RemoteMediator is part of the Paging 3 library. It sits between the PagingSource (Room) and the network. When Paging detects that Room has run out of cached data, it calls RemoteMediator.load(), which fetches the next page from the API, writes it into Room, and then Paging reads from Room again.
The flow is: UI requests page -> Paging checks Room -> Room is empty -> RemoteMediator fetches from API -> writes to Room -> Paging reads from Room -> UI renders.
In our current design, we wrote this logic manually in the Repository. RemoteMediator formalizes the same pattern with built-in support for REFRESH, PREPEND, and APPEND load types. For a production app at scale, I would migrate to Paging 3 + RemoteMediator because it also handles boundary callbacks, placeholder items, and load state tracking out of the box.
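The manual repository logic that RemoteMediator formalizes boils down to a cache-first loop. A minimal sketch (all names are illustrative, and the "network" is a plain function here): check the local cache, fetch and write on a miss, and always serve the read from the cache, which stays the source of truth.

```java
// Toy version of the manual pagination pattern: UI requests page ->
// check cache -> miss -> fetch from "network" -> write to cache ->
// serve from cache. REFRESH drops the cache so pages are re-fetched.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

class ManualPager {
    private final Map<Integer, List<String>> localCache = new HashMap<>();
    private final Function<Integer, List<String>> networkFetch;

    ManualPager(Function<Integer, List<String>> networkFetch) {
        this.networkFetch = networkFetch;
    }

    /** Cache-first load: network is hit only on a cache miss (APPEND). */
    List<String> loadPage(int page) {
        if (!localCache.containsKey(page)) {
            localCache.put(page, networkFetch.apply(page));
        }
        return localCache.get(page); // always served from the cache
    }

    /** REFRESH: drop the cache so the next load re-fetches. */
    void refresh() { localCache.clear(); }
}
```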
Q2: Does the current design handle screen rotation? How exactly?
Yes, fully. Three layers protect us:
- ViewModel survives rotation. It is scoped to the Activity lifecycle via ViewModelStore. When the Activity is destroyed and recreated on rotation, the same ViewModel instance is returned.
- StateFlow caches the latest value. When Compose resubscribes after recreation, collectAsStateWithLifecycle() immediately gets the current state. No re-fetching, no flash of empty screen.
- Room persists data to disk. Even in the extreme case of process death (not just rotation), the feed data is already on disk. The ViewModel re-initializes, observes Room, and the UI is populated from cache.
Scroll position is handled by rememberLazyListState() in Compose, which saves and restores via the rememberSaveable mechanism internally.
Q3: Quick — write a thread-safe singleton in Java. Not Kotlin object, actual Java.
Double-checked locking with volatile:
public class NetworkUtils {
private static volatile NetworkUtils INSTANCE;
private NetworkUtils() {
// private constructor prevents external instantiation
}
public static NetworkUtils getInstance() {
if (INSTANCE == null) { // first check (no lock)
synchronized (NetworkUtils.class) {
if (INSTANCE == null) { // second check (with lock)
INSTANCE = new NetworkUtils();
}
}
}
return INSTANCE;
}
}
Why volatile? Without it, the JVM can reorder the constructor — another thread might see a non-null reference to a half-constructed object. volatile prevents that reordering. Why double-checked? The outer null check avoids the synchronized overhead on every call after initialization. The inner check ensures only one thread creates the instance.
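For completeness, when the singleton takes no parameters there is a simpler alternative worth mentioning: the initialization-on-demand holder idiom. The JLS guarantees that a nested class is initialized lazily and exactly once, so no volatile or synchronized is needed at all. A minimal sketch (hypothetical class name):

```java
// Initialization-on-demand holder idiom: the JVM initializes Holder lazily,
// on first access, and class initialization is thread-safe per the JLS —
// no volatile, no synchronized, no double-checking.
class NetworkUtilsHolder {
    private NetworkUtilsHolder() {
        // private constructor prevents external instantiation
    }

    private static class Holder {
        // Created once, on the first call to getInstance()
        static final NetworkUtilsHolder INSTANCE = new NetworkUtilsHolder();
    }

    public static NetworkUtilsHolder getInstance() {
        return Holder.INSTANCE;
    }
}
```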
Q4: What if the singleton needs a parameter — like a Context? You cannot pass it to getInstance() every time. Show me.
Same double-checked locking, but taking the parameter on the first call. Here it is in Java first, followed by a reusable Kotlin SingletonHolder delegate that packages the same pattern:
public class DatabaseHelper {
private static volatile DatabaseHelper INSTANCE;
private final Context appContext;
private DatabaseHelper(Context context) {
this.appContext = context.getApplicationContext();
}
public static DatabaseHelper getInstance(Context context) {
if (INSTANCE == null) {
synchronized (DatabaseHelper.class) {
if (INSTANCE == null) {
INSTANCE = new DatabaseHelper(context);
}
}
}
return INSTANCE;
}
}
open class SingletonHolder<out T, in A>(
private val creator: (A) -> T
) {
@Volatile private var instance: T? = null
fun getInstance(arg: A): T =
instance ?: synchronized(this) {
instance ?: creator(arg).also { instance = it }
}
}
// Usage:
class DatabaseHelper private constructor(context: Context) {
companion object : SingletonHolder<DatabaseHelper, Context>(
::DatabaseHelper
)
}
// Call: DatabaseHelper.getInstance(context)
Key detail: Always store context.getApplicationContext(), never the Activity context. Storing an Activity context in a singleton leaks the entire Activity — the GC cannot collect it because the singleton (which lives forever) holds a reference.
Q5: You are using Room as the source of truth. What happens when you ship a schema change and the migration fails on a user's device?
If a Room migration fails and you have no fallback, the app crashes on startup — the database cannot be opened. There are two strategies:
- Destructive fallback: Call fallbackToDestructiveMigration() on the database builder. Room wipes the database and recreates it from scratch. You lose cached data but the app works. For a feed app, this is acceptable — the data is re-fetched from the server on next launch. You would NOT do this for an app with user-generated local data (notes app, offline-first todo list).
- Manual migrations: Write explicit Migration(1, 2) classes with ALTER TABLE statements. Test them with MigrationTestHelper in instrumented tests. This is the safe production approach.
In practice, I use both: manual migrations as the primary path, with destructive fallback as the safety net. If the migration has a bug, at least the app opens instead of crashing in a loop.
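The combined strategy can be illustrated with a toy migration runner (this is not Room's API, just the idea): walk the Migration(from, to) chain in order, and fall back to a destructive wipe when a step in the chain is missing.

```java
// Toy sketch of ordered manual migrations with a destructive fallback.
// Not Room's implementation — just the control flow: apply each
// Migration(from, to) step in sequence; a gap in the chain triggers
// the equivalent of fallbackToDestructiveMigration() (wipe to version 0).
import java.util.List;
import java.util.Optional;

class MigrationRunner {
    record Migration(int from, int to, Runnable action) {}

    /** Returns the final schema version, or 0 after a destructive wipe. */
    static int migrate(int current, int target, List<Migration> steps) {
        while (current < target) {
            final int v = current;
            Optional<Migration> next = steps.stream()
                    .filter(m -> m.from() == v)
                    .findFirst();
            if (next.isEmpty()) {
                return 0; // broken chain -> wipe and recreate from scratch
            }
            next.get().action().run(); // e.g. run the ALTER TABLE statements
            current = next.get().to();
        }
        return current;
    }
}
```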
Q6: LazyColumn or RecyclerView? Which one and why?
LazyColumn for new code. It eliminates the Adapter, ViewHolder, DiffUtil, LayoutManager boilerplate entirely. With items(key = { ... }), it handles stable IDs and diffing automatically. animateItemPlacement() gives free reorder animations. And since the rest of our UI is Compose, mixing in a RecyclerView would mean maintaining an interop layer for no reason.
The only exception: if you have an existing large RecyclerView with complex custom ItemDecorations, ItemAnimators, and ItemTouchHelpers that would be painful to rewrite, keep it and wrap it in AndroidView inside Compose. Do not rewrite working code just for the sake of Compose purity.
Q7: A coroutine inside viewModelScope.launch throws an unhandled exception. What happens?
viewModelScope uses SupervisorJob + Dispatchers.Main.immediate. Because of the SupervisorJob, one child coroutine failing does NOT cancel sibling coroutines. But the exception still propagates to the CoroutineExceptionHandler — and if you have not installed one, it crashes the app via the default uncaught exception handler.
Fix: either wrap the body in try/catch, use runCatching, or install a CoroutineExceptionHandler on the launch. For our feed, every coroutine that touches the network has a try/catch that updates _uiState with an error message instead of crashing.
Q8: Why Hilt over Koin or manual DI?
Hilt is compile-time. If a dependency is missing, you get a build error, not a runtime crash in production. Koin is runtime — the graph is resolved when you call get() or inject(). A missing binding only blows up when that screen is opened by a real user.
Hilt also integrates natively with ViewModel, WorkManager, Navigation, and Compose via @HiltViewModel and hiltViewModel(). No manual factory boilerplate.
Trade-off: Hilt has more annotation processing overhead (slower incremental builds) and a steeper learning curve with its component hierarchy. For smaller apps, Koin's simplicity might win. For a team-scale app like this feed, compile-time safety is worth the build cost.
Q9: You mentioned 60fps smooth scrolling earlier. Where does the "16ms per frame" number come from? What is jank?
The display refreshes at 60Hz — that is 60 frames per second. 1000ms / 60 = ~16.6ms per frame. The system has to measure, layout, and draw every frame within that budget.
Android uses a VSync signal — a hardware tick from the display that says "time to draw the next frame." The Choreographer listens to VSync and schedules the work: input handling, animation ticks, view traversal (measure/layout/draw). If all of that finishes within 16ms, the frame is delivered on time. Smooth.
Jank is what happens when a frame misses the deadline. The Choreographer could not finish the work before the next VSync tick, so the display shows the old frame again. The user sees a stutter or hitch — dropped frame. Common causes: doing disk I/O or network on the main thread, inflating heavy layouts, GC pauses from excessive allocations, or nested layouts causing exponential measure passes.
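The arithmetic behind those numbers is worth sketching (illustrative helper, not a real API): the per-frame budget at a given refresh rate, and how many VSync ticks a frame occupies when its work overruns that budget.

```java
// Frame-budget arithmetic (illustrative helper class, not an Android API).
// At 60Hz the budget is 1000/60 ≈ 16.67ms per frame; a frame whose work
// exceeds the budget is shown late by however many VSync ticks it missed.
class FrameBudget {
    /** Milliseconds available per frame at the given refresh rate. */
    static double budgetMs(int refreshHz) {
        return 1000.0 / refreshHz;
    }

    /** VSync ticks the frame's work occupies; anything above 1 is jank. */
    static int ticksUsed(double workMs, int refreshHz) {
        return (int) Math.ceil(workMs / budgetMs(refreshHz));
    }
}
```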
For our feed: Compose's lazy composition + image loading on a background thread (Coil/Glide) + no main-thread database access (Room enforces this) keeps us within budget on most frames.
Tip for viewers: You don't need to memorize Choreographer internals. Know the 16ms budget, what jank means, and the common causes. That is enough for 90% of interviews. GOOD TO KNOW
That was a really strong session. You started with clear requirements, made deliberate architecture choices and justified them well, wrote clean production code, and handled the follow-ups with depth. The way you walked through the optimistic update pattern and eventual consistency showed you've dealt with real-world trade-offs, not just textbook answers.
I especially liked that you introduced UseCases proactively as the app grows — that shows you think about maintainability, not just getting it to work.
We'll be in touch. Thanks for your time.
Thank you — I enjoyed the discussion. Looking forward to hearing from you.
End of Interview
You covered: Requirements, MVVM Architecture, SOLID, Room + Retrofit, Cursor Pagination, Flows (hot/cold), UseCases, Compose UI, Debounce + FlatMapLatest, Filters, Top-K DSA, Like Feature, Optimistic Updates, Eventual Consistency, Singletons, VSync/Jank, and DI with Hilt.
That is a complete Android System Design round for any top MNC.