LunaNotes

Iris Java Developer Interview Subtitles & Questions Download

Iris Java Developer Interview Experience & Questions [ 14 LPA+ ]

GenZ Career


[00:00]

Hey everyone, recently one of our

[00:02]

subscribers Krishna Kumar cracked a Java

[00:04]

developer interview at Iris Software. He

[00:06]

has shared everything with me. So

[00:08]

basically he had applied through Iris

[00:10]

website, and he has a total of 3 years of

[00:13]

experience. I'll be sharing his

[00:14]

technical round experience only guys and

[00:17]

if you want to share your interview

[00:19]

experience then fill the form below in

[00:21]

the description and please make sure to

[00:23]

subscribe to see such real interview

[00:24]

experience. So now let's get started.

[00:27]

First, the interviewer asked to explain your

[00:29]

current project flow from API request to

[00:32]

database. What parts of the system do

[00:34]

you own completely? What was the last

[00:37]

production bug you fixed and how did you

[00:39]

debug it? What design decisions in your

[00:42]

project didn't scale well initially? So

[00:44]

guys you have to answer these types of

[00:46]

questions as per your experience level.

[00:49]

Then he asked to give a real example

[00:51]

where inheritance caused a problem. So

[00:53]

speaking from my project

[00:55]

experience we had a base user class and

[00:58]

many subclasses like AdminUser and

[01:00]

CustomerUser. Later, a small change in

[01:03]

base user validation broke multiple

[01:05]

child classes unexpectedly. It created

[01:08]

tight coupling, made debugging hard, and

[01:10]

forced changes across many modules at

[01:13]

the same time. Then interviewer asks

[01:15]

where did you use composition and why?

[01:17]

So you can say in my project I used

[01:19]

composition by building services from

[01:22]

smaller components, like an order service

[01:24]

using payment service and notification

[01:26]

service. Instead of extending classes I

[01:28]

injected dependencies. This made the

[01:31]

code loosely coupled easy to test and

[01:34]

flexible because uh we could replace or

[01:37]

modify one component without affecting

[01:39]

the whole design. Then they ask how did
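The composition approach described here can be sketched as below. This is a minimal illustration, not the project's actual code; the names (OrderService, PaymentService, NotificationService, placeOrder) are made up for the example:

```java
// Composition over inheritance: OrderService is built FROM smaller
// collaborators injected through its constructor, instead of extending
// a base class. Each dependency can be replaced or mocked in tests.
interface PaymentService { boolean charge(double amount); }
interface NotificationService { void notify(String message); }

public class OrderService {
    private final PaymentService payments;
    private final NotificationService notifications;

    public OrderService(PaymentService payments, NotificationService notifications) {
        this.payments = payments;
        this.notifications = notifications;
    }

    public boolean placeOrder(double amount) {
        boolean paid = payments.charge(amount);   // delegate, don't inherit
        if (paid) {
            notifications.notify("Order placed for " + amount);
        }
        return paid;
    }
}
```

Because the collaborators are interfaces, a failing payment gateway or a new notification channel can be swapped in without touching OrderService itself, which is the loose coupling the answer refers to.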

[01:41]

a wrong equals and hashCode

[01:43]

implementation break something. So you

[01:46]

can say we used a user object as a key in

[01:49]

a HashMap, but equals compared the user ID

[01:51]

while hashCode used the name. Because of

[01:53]

this mismatch the same user got stored

[01:55]

as multiple keys and lookup failed. It

[01:58]

caused duplicate cache entries and

[02:00]

incorrect data being returned. Then
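A minimal reproduction of the mismatch described above; the BrokenKey class is hypothetical, standing in for the user object from the answer:

```java
import java.util.HashMap;
import java.util.Map;

// Bug reproduction: equals() compares the id, but hashCode() is based on
// the name. Two keys that are "equal" can land in different buckets, so
// the same user is stored twice and lookups silently miss.
public class BrokenKey {
    final int id;
    final String name;

    BrokenKey(int id, String name) { this.id = id; this.name = name; }

    @Override public boolean equals(Object o) {
        return o instanceof BrokenKey && ((BrokenKey) o).id == this.id;
    }

    // BUG: uses a different field than equals(), violating the contract
    // that equal objects must produce equal hash codes.
    @Override public int hashCode() { return name.hashCode(); }

    public static int entriesAfterDuplicatePut() {
        Map<BrokenKey, String> cache = new HashMap<>();
        cache.put(new BrokenKey(1, "alice"), "v1");
        cache.put(new BrokenKey(1, "Alice"), "v2"); // same id, different name
        return cache.size(); // 2 instead of 1: a duplicate cache entry
    }
}
```

The fix is to base both methods on the same field (here, `id`), so equal users always hash to the same bucket.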

[02:02]

interviewer asked why did you avoid

[02:04]

using OOP and keep things simple. So you

[02:07]

can say in my project I avoided heavy

[02:09]

OOP when the requirement was simple

[02:11]

like converting a request to a response

[02:13]

or doing small validations. Instead of

[02:15]

creating many classes and abstractions I

[02:18]

used simple utility methods and clear,

[02:20]

direct code. It reduced complexity and

[02:22]

improved readability. Then interviewer

[02:24]

asked why did you choose map instead of

[02:26]

list in one real scenario. So you can

[02:29]

say in one of the modules I needed to

[02:31]

fetch customer details quickly using

[02:33]

customer ID. If I used a list, I would

[02:35]

have to loop every time to find the

[02:37]

match. So I used a map with the customer ID

[02:40]

as the key, which gives fast O(1) lookup

[02:42]

and improved the performance. Then
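The list-versus-map trade-off in that answer can be sketched as below; the Customer record and method names are illustrative:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Contrast of the two lookups: a List forces an O(n) scan on every call,
// while a Map keyed by customer id gives average O(1) access.
public class CustomerLookup {
    public record Customer(String id, String name) {}

    public static Customer findInList(List<Customer> customers, String id) {
        for (Customer c : customers) {          // linear scan every lookup
            if (c.id().equals(id)) return c;
        }
        return null;
    }

    public static Map<String, Customer> index(List<Customer> customers) {
        Map<String, Customer> byId = new HashMap<>();
        for (Customer c : customers) byId.put(c.id(), c);
        return byId;
    }

    public static Customer findInMap(Map<String, Customer> byId, String id) {
        return byId.get(id);                    // average O(1) hash lookup
    }
}
```

Building the index costs one pass up front, which pays off as soon as the collection is queried repeatedly.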

[02:44]

interviewer asked when did HashMap

[02:46]

become a performance issue. So a HashMap

[02:50]

becomes a performance issue when your

[02:52]

operations stop being average O(1) and start

[02:55]

degrading, usually due to many

[02:57]

collisions. This happens with a bad hash

[03:00]

code, weak key distribution or an

[03:02]

undersized map with frequent

[03:04]

rehashing. Then lookups drift toward O(n)

[03:07]

and latency spikes. Then the interviewer

[03:09]

asked, "Have you ever replaced one

[03:11]

collection with another for

[03:13]

optimization?" So you could say yes when

[03:16]

profiling showed the collection choice

[03:18]

was the bottleneck. For example, I

[03:20]

replaced an ArrayList with a LinkedList

[03:23]

for heavy middle insertions and swapped

[03:26]

a HashMap for an EnumMap when keys were

[03:28]

enums. The goal was fewer allocations,

[03:31]

faster lookups and predictable

[03:33]

performance. And before moving to the
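The HashMap-to-EnumMap swap might look like this; OrderStatus is a made-up enum, not from the actual project:

```java
import java.util.EnumMap;
import java.util.Map;

// When all keys are constants of one enum, EnumMap stores values in a
// plain array indexed by the enum's ordinal, avoiding hashing entirely:
// fewer allocations, faster lookups, predictable iteration order.
public class EnumMapSwap {
    public enum OrderStatus { NEW, PAID, SHIPPED }

    public static Map<OrderStatus, Long> countsByStatus() {
        Map<OrderStatus, Long> counts = new EnumMap<>(OrderStatus.class);
        counts.put(OrderStatus.NEW, 12L);
        counts.put(OrderStatus.PAID, 7L);
        return counts;
    }
}
```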

[03:35]

next question guys, I would like to

[03:37]

share one important thing with you.

[03:39]

Actually Krishna prepared from our

[03:41]

interview preparation kit. So let me

[03:43]

tell you this kit has four main parts.

[03:46]

First is complete interview preparation

[03:48]

material. It is a step-by-step material

[03:50]

prepared by experts based on MNC interviews.

[03:52]

99% of the questions asked in interviews

[03:55]

are covered in it. Second is two real

[03:57]

enterprise client projects; the code and

[03:59]

video-recorded sessions are included, and

[04:02]

you can add this in your resume. Third

[04:04]

is lifetime chat support. Here you can

[04:06]

ask your doubts anytime. Fourth is

[04:08]

referral support. Here we help you get

[04:11]

referred to the top MNCs. So basically

[04:13]

the material is organized as per your

[04:16]

experience level and covers Java, Spring

[04:18]

Boot, Spring Security, Spring Data, JPA,

[04:20]

microservices, Kafka, Maven, Git, coding

[04:23]

questions, stream API coding questions

[04:24]

and many more. You can buy just the

[04:26]

complete interview preparation material

[04:28]

or the full interview preparation kit

[04:30]

with project support and referrals. I

[04:33]

have added the links in the description

[04:34]

below. So now moving to our interview

[04:36]

experience. Then the interviewer asked to

[04:39]

explain a bug caused by modifying a

[04:41]

collection while iterating. So a common

[04:43]

bug happens when we loop over a list

[04:46]

with a for-each and remove elements

[04:48]

inside the loop. Java detects a

[04:50]

structural change and throws

[04:52]

ConcurrentModificationException. The correct

[04:54]

approach is using the iterator's remove

[04:56]

method or collecting items to delete and

[04:59]

removing them after iteration. Then they
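The safe-removal approach described above can be sketched as follows (the dropEvens method is an illustrative example):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Removing elements during iteration: calling list.remove(...) inside a
// for-each loop throws ConcurrentModificationException, but removing
// through the Iterator itself is safe.
public class SafeRemoval {
    public static List<Integer> dropEvens(List<Integer> input) {
        List<Integer> numbers = new ArrayList<>(input);
        Iterator<Integer> it = numbers.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove();   // structural change via the iterator is allowed
            }
        }
        return numbers;
        // Equivalent one-liner since Java 8:
        // numbers.removeIf(n -> n % 2 == 0);
    }
}
```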

[05:01]

ask why did you need multi-threading in

[05:03]

your project? So we needed multi-threading

[05:05]

to improve throughput and

[05:06]

responsiveness. Some tasks like calling

[05:09]

external APIs, processing files and

[05:11]

generating reports were independent. By

[05:14]

running them in parallel using thread

[05:16]

pools, we reduced the waiting time,

[05:18]

utilized the CPU better, and met latency SLAs

[05:21]

under peak load. Then they ask what

[05:24]

issues occurred due to shared mutable data.

[05:27]

So we faced a race condition because

[05:29]

multiple threads updated the same

[05:31]

mutable object without proper

[05:33]

synchronization. Sometimes one thread

[05:35]

overwrote another's changes. So

[05:37]

counters were incorrect and data looked

[05:40]

random. We fixed it by using

[05:42]

immutability where possible, and atomic

[05:45]

classes or locks for shared state. Then
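The atomic-class fix for the lost-update problem can be sketched as below; SafeCounter is an illustrative example, not the project's code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Race-condition fix: AtomicInteger makes the read-modify-write increment
// a single atomic operation, so concurrent threads can no longer
// overwrite each other's updates (a plain `count++` would lose some).
public class SafeCounter {
    private final AtomicInteger count = new AtomicInteger();

    public void increment() { count.incrementAndGet(); }
    public int value()      { return count.get(); }

    public static int countWithThreads(int threads, int perThread)
            throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return counter.value();   // threads * perThread, with no lost updates
    }
}
```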

[05:48]

interviewer asked why didn't synchronized

[05:50]

solve the problem completely. So

[05:52]

synchronized fixed correctness, but it

[05:55]

didn't fully solve performance and

[05:57]

design issues. A single lock became a

[05:59]

bottleneck under load causing threads to

[06:02]

block and latency to rise. Also, synchronizing

[06:05]

the wrong scope still allowed visibility

[06:07]

issues elsewhere. We improved it with

[06:10]

finer-grained locks, atomics, and reduced

[06:12]

shared state. Then they asked how did

[06:15]

you decide the thread pool size? So we

[06:17]

decided the thread pool size based on

[06:19]

workload type and measured bottlenecks.

[06:22]

For CPU-bound tasks we kept it near the

[06:25]

number of cores to avoid context

[06:26]

switching. For I/O-bound tasks

[06:29]

we allowed more threads because they spend

[06:31]

their time waiting. Then we validated with

[06:33]

load tests and metrics. Then they ask
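The sizing heuristic from that answer can be sketched as below. The 1:4 ratio for I/O-bound work is only an illustrative starting point, not a fixed rule; real sizes should be confirmed with load tests as the answer says:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Thread-pool sizing heuristic: CPU-bound pools near the core count
// (more threads just cause context switching); I/O-bound pools larger,
// because threads spend most of their time waiting on the network/disk.
public class PoolSizing {
    public static int cpuBoundSize() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static int ioBoundSize() {
        // Illustrative multiplier; tune with profiling and load tests.
        return Runtime.getRuntime().availableProcessors() * 4;
    }

    public static ExecutorService cpuBoundPool() {
        return Executors.newFixedThreadPool(cpuBoundSize());
    }
}
```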

[06:36]

where did streams make code worse

[06:38]

instead of better. So streams made code

[06:41]

worse when the logic needed complex

[06:43]

branching, early exits, or detailed error

[06:48]

handling. The stream pipeline became

[06:48]

long and hard to debug and performance

[06:50]

suffered due to extra allocations. In

[06:53]

those cases, a simple loop was clearer,

[06:55]

faster and easier to maintain. Then

[06:57]

interviewer asked, "Have you faced

[06:59]

performance issues with the streams?" So

[07:01]

yes, you can say especially with large

[07:04]

collections where a stream pipeline

[07:05]

created many temporary objects and

[07:07]

repeated boxing and unboxing. We also

[07:10]

saw overhead from multiple intermediate

[07:12]

operations like map, filter, and

[07:14]

collect. After profiling, we replaced hot

[07:16]

paths with plain loops or optimized

[07:18]

collections and got better latency. Then

[07:21]

interviewer asked when did Optional

[07:22]

create confusion. So Optional created

[07:24]

confusion when developers treated it

[07:26]

like a normal field type and started

[07:29]

passing Optional everywhere, even storing

[07:31]

it in entities. That made the API noisy

[07:34]

and hid real nullability rules. We

[07:37]

stuck to using Optional mainly for return

[07:39]

values, not parameters, and never for

[07:42]

persistence models. Then they ask how
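That "Optional for return values only" guideline can be sketched as below; UserRepository and its contents are a hypothetical example:

```java
import java.util.Map;
import java.util.Optional;

// Optional at the API boundary: the return type forces the caller to
// handle absence explicitly, while fields and parameters stay plain
// types (no Optional stored in entities or passed as arguments).
public class UserRepository {
    private final Map<String, String> users = Map.of("u1", "Krishna");

    // Good: Optional signals "may be absent" to every caller.
    public Optional<String> findName(String id) {
        return Optional.ofNullable(users.get(id));
    }

    // The parameter stays a plain String; callers never pass Optional in.
    public String displayName(String id) {
        return findName(id).orElse("guest");
    }
}
```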

[07:44]

did you use CompletableFuture in a real

[07:47]

flow. So you can say I used CompletableFuture

[07:50]

to run independent tasks in

[07:52]

parallel like fetching user details,

[07:53]

order history and recommendations from

[07:56]

different services. Then I combined

[07:59]

results by using the allOf and thenCombine

[08:01]

methods, handled failures with the

[08:03]

exceptionally method, and returned a

[08:06]

single aggregate response without

[08:08]

blocking request threads. Then they ask
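The aggregation flow from that answer can be sketched as below; the service calls are stubbed with supplyAsync, and the names are illustrative:

```java
import java.util.concurrent.CompletableFuture;

// Parallel aggregation with CompletableFuture: two independent lookups
// run concurrently, thenCombine merges their results, and exceptionally
// supplies a fallback if either stage fails.
public class ProfileAggregator {
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "user:krishna");
    }

    static CompletableFuture<String> fetchOrders() {
        return CompletableFuture.supplyAsync(() -> "orders:3");
    }

    public static String aggregate() {
        return fetchUser()
                .thenCombine(fetchOrders(), (u, o) -> u + "|" + o) // merge both
                .exceptionally(ex -> "fallback")                   // failure path
                .join();  // only the final aggregation point blocks
    }
}
```

In a real service the join() would typically be replaced by returning the future itself, so no request thread blocks at all.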

[08:10]

how does Spring Boot simplify your daily

[08:12]

work? So Spring Boot simplifies daily

[08:15]

work by removing setup overhead. Auto-

[08:18]

configuration and starters give us

[08:20]

sensible defaults, so we focus on

[08:22]

business logic instead of wiring beans.

[08:24]

Actuator helps with health checks and

[08:26]

metrics, and profiles make environment

[08:28]

config clean. Overall it speeds up

[08:30]

development and reduces production

[08:32]

surprises. Then interviewer asked how do

[08:35]

you handle global exceptions? So you can

[08:38]

say we handle global exceptions by using

[08:40]

@ControllerAdvice with @ExceptionHandler

[08:42]

methods. This centralizes error

[08:44]

responses, keeps controllers clean, and

[08:46]

ensures consistent HTTP status codes and

[08:49]

messages. We also log with correlation

[08:52]

IDs, map validation errors clearly, and

[08:55]

avoid leaking internal stack traces to

[08:57]

clients. Then interviewer asks how do

[08:59]

you validate request data properly. So

[09:02]

we validate the request data by using

[09:04]

bean validation annotations like @NotNull,

[09:06]

@Size, and @Pattern on DTOs, and trigger

[09:09]

them with the @Valid annotation in controller

[09:11]

methods. For cross-field rules, we use

[09:14]

custom validators, and we handle

[09:16]

validation errors globally to return

[09:18]

clear, consistent responses. Then

[09:21]

interviewer asks how do you manage

[09:23]

config differences across environments.

[09:26]

So you could say we manage config

[09:27]

differences using spring profiles and

[09:29]

externalized configuration. Each

[09:31]

environment gets its own application-

[09:34]

{profile}.yml file, and sensitive

[09:36]

values come from environment variables

[09:39]

or a secret manager. We also keep

[09:40]

defaults safe, use feature flags when

[09:42]

needed, and verify config via Actuator

[09:45]

and startup checks. Then they ask what

[09:48]

checks do you do before deploying code

[09:50]

to production. So before deploying

[09:52]

to production we run a clear checklist.

[09:54]

All unit and integration tests must

[09:56]

pass. We review coverage for critical

[09:58]

paths. We check static analysis and

[10:00]

security scans. Validate configs for the

[10:02]

target profile and confirm DB migrations

[10:05]

are safe. Finally, we do a smoke test

[10:07]

in staging, review logs and metrics

[10:09]

dashboards, and ensure rollback is

[10:11]

ready. And guys then interviewer asked

[10:13]

two coding questions. First, to write the

[10:15]

best way of singleton design pattern and

[10:17]

then he asked to implement the caching

[10:20]

by using a map. So guys this is all about
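One possible answer to both coding questions is sketched below. The initialization-on-demand holder idiom is one commonly cited "best" singleton (lazy and thread-safe without locking; an enum singleton is another), and a ConcurrentHashMap with computeIfAbsent gives a minimal map-based cache. ConfigCache and its contents are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Singleton via the holder idiom: the JVM loads Holder (and creates the
// instance) lazily and exactly once, with no explicit synchronization.
// The cache uses computeIfAbsent so each key is loaded only on first use.
public class ConfigCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int loads = 0;                        // counts "slow" loads

    private ConfigCache() {}                      // no external instantiation

    private static class Holder {
        static final ConfigCache INSTANCE = new ConfigCache();
    }

    public static ConfigCache getInstance() { return Holder.INSTANCE; }

    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            loads++;                              // stands in for a slow lookup
            return "value-of-" + k;
        });
    }

    public int loadCount() { return loads; }
}
```

A second call with the same key returns the cached value without re-running the loader, which is the behavior the interviewer is usually probing for.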

[10:22]

Iris Software interview experience of

[10:25]

Krishna. Please make sure to check the

[10:27]

interview preparation kit. Thank you.

These subtitles were extracted using the Free YouTube Subtitle Downloader by LunaNotes.