Technology Sharing

Game Development Interview Question 7

2024-07-08


What does the ArrayList data structure look like underneath?

ArrayList is implemented on top of an array and resizes it dynamically. When the capacity is insufficient, it allocates a larger array, copies the existing elements into it, and then appends the new element.

The expansion mechanism of ArrayList (in the JDK implementation) works as follows: the default capacity of 10 is allocated when the first element is added; whenever adding an element would exceed the current capacity, a new array of roughly 1.5 times the old capacity (oldCapacity + oldCapacity / 2) is allocated and the existing elements are copied into it before the new element is added.
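
To make the growth step concrete, here is a minimal sketch (illustrative, not the JDK source) of a growable int array that expands to roughly 1.5 times its size when full, in the spirit of ArrayList's grow():

    import java.util.Arrays;

    // Minimal growable array sketch (illustrative, not the JDK implementation).
    public class GrowableIntArray {
        private int[] elements = new int[10]; // default initial capacity, like ArrayList's 10
        private int size = 0;

        public void add(int value) {
            if (size == elements.length) {
                grow();
            }
            elements[size++] = value;
        }

        private void grow() {
            // New capacity is about 1.5x the old one (oldCapacity + oldCapacity / 2);
            // the existing elements are then copied into the larger array.
            int newCapacity = elements.length + (elements.length >> 1);
            elements = Arrays.copyOf(elements, newCapacity);
        }

        public int get(int index) {
            if (index < 0 || index >= size) throw new IndexOutOfBoundsException(String.valueOf(index));
            return elements[index];
        }

        public int size() { return size; }
    }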

Advantages and disadvantages of ArrayList expansion mechanism:

Advantages:
  1. The expansion mechanism is simple to understand and easy to implement.
  2. Each expansion multiplies the capacity by a constant factor, so the cost of copying the array is amortized across many additions and appending elements stays efficient (amortized O(1)).
Disadvantages:
  1. The expanded capacity is usually larger than what is actually needed, so some memory is wasted.
  2. When the list is large, each expansion has to allocate and copy a large array, which consumes memory and CPU and hurts performance. For this reason it is worth setting a reasonable initial capacity when the final size is roughly known.
ArrayList is not thread safe

The internal implementation of ArrayList is based on an array, and none of its methods are synchronized. When multiple threads access the same ArrayList at the same time, data inconsistency may occur: for example, while one thread is reading from the ArrayList, another thread may be adding or removing elements, so the reading thread can observe stale or corrupted data and the program misbehaves.

Common methods to solve ArrayList thread insecurity are:
  1. Use Vector instead of ArrayList: Vector is synchronized (all of its methods are declared synchronized), so it is thread-safe;
  2. Wrap the ArrayList in a thread-safe view: use Collections.synchronizedList(list) to obtain a synchronized wrapper around the ArrayList (see the sketch after this list);
  3. Use CopyOnWriteArrayList: a thread-safe collection whose write operations copy the underlying array; the overhead is acceptable when reads greatly outnumber writes, which is the concurrent scenario it is designed for.
  4. Use locks explicitly: a lock mechanism such as ReentrantLock (or synchronized blocks) can guard every access to the ArrayList.
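
A hedged sketch of options 2 and 3 above, using the standard java.util and java.util.concurrent classes:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class ThreadSafeListDemo {
        public static void main(String[] args) {
            // Option 2: wrap an ArrayList in a synchronized view.
            List<String> syncList = Collections.synchronizedList(new ArrayList<>());
            syncList.add("a");
            // Iteration over the synchronized view must still be manually synchronized.
            synchronized (syncList) {
                for (String s : syncList) {
                    System.out.println(s);
                }
            }

            // Option 3: CopyOnWriteArrayList copies the underlying array on every write,
            // so reads need no locking; best when reads greatly outnumber writes.
            List<String> cowList = new CopyOnWriteArrayList<>();
            cowList.add("b");
            for (String s : cowList) {   // safe to iterate while other threads write
                System.out.println(s);
            }
        }
    }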

Difference between stack and heap

  • A stack is a special linear list in which elements can be inserted and deleted only at one end (the top), so it follows the last-in, first-out (LIFO) principle. It is commonly used to store function call information such as parameters, return addresses, and local variables.

  • A heap is a special tree-shaped structure, usually a complete binary tree, in which every node's value is greater than or equal to (max-heap) or less than or equal to (min-heap) the values of its children, so the root always holds the maximum or minimum. A heap is a dynamic structure that is useful for priority queues, heap sort, and quickly finding extreme values in large amounts of data.
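
A short sketch using java.util classes: ArrayDeque used as a LIFO stack, and PriorityQueue, which is implemented as a binary min-heap:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.PriorityQueue;

    public class StackHeapDemo {
        public static void main(String[] args) {
            // Stack: last-in, first-out.
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(1);
            stack.push(2);
            stack.push(3);
            System.out.println(stack.pop()); // 3 (the most recently pushed element)

            // Heap: PriorityQueue is a binary min-heap, so the smallest element
            // is always at the root and is returned first.
            PriorityQueue<Integer> minHeap = new PriorityQueue<>();
            minHeap.add(5);
            minHeap.add(1);
            minHeap.add(3);
            System.out.println(minHeap.poll()); // 1 (the minimum)
        }
    }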

Coroutine bottom layer

A coroutine is essentially a lightweight, cooperatively scheduled unit of execution. Each coroutine keeps its own execution context (a stack or equivalent state holding its parameters, local variables, and resume point), and it can be suspended, resumed, and switched entirely in user space without a kernel thread switch; this cheap saving and restoring of context is what makes coroutines possible.

C# GC (Garbage Collection) Principle

  1. Reference count: When an object is referenced, its reference count is increased by 1, and when the reference is invalid, its reference count is reduced by 1. If the reference count of an object becomes 0, the object will be reclaimed by the garbage collector.
  2. Mark-sweep: when the garbage collector runs, it traverses objects along their reference relationships starting from the roots, marks every reachable object as "alive", and then sweeps (reclaims) all objects that were never marked (a toy sketch of the mark phase follows after this list).
  3. Copying algorithm: The garbage collector divides the available memory into two halves and uses only one at a time. When space runs out, the surviving objects are copied into the other half, and the original half is then cleared in one pass.
  4. Mark-Compact Algorithm: When the garbage collector is running, it will first mark all surviving objects, and then move all surviving objects to one end to clear unused space fragments.
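
For reference, the .NET CLR garbage collector is a generational tracing (mark-and-compact) collector and does not use reference counting. The following is a toy, language-agnostic sketch of the mark phase of mark-sweep over an object graph (written in Java here purely to illustrate the idea; the class names are made up):

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Toy mark-sweep illustration: objects reachable from the roots survive,
    // everything else would be reclaimed by the sweep phase.
    public class ToyMarkSweep {
        static class Obj {
            final String name;
            final List<Obj> references = new ArrayList<>();
            Obj(String name) { this.name = name; }
        }

        static Set<Obj> mark(List<Obj> roots) {
            Set<Obj> alive = new HashSet<>();
            for (Obj root : roots) {
                markFrom(root, alive);
            }
            return alive;
        }

        private static void markFrom(Obj obj, Set<Obj> alive) {
            if (obj == null || !alive.add(obj)) return; // already marked (or nothing to mark)
            for (Obj ref : obj.references) {
                markFrom(ref, alive);
            }
        }

        public static void main(String[] args) {
            Obj a = new Obj("a"), b = new Obj("b"), c = new Obj("c");
            a.references.add(b);                   // a -> b, so b is reachable
            // c is not referenced from any root, so it is "dead"
            Set<Obj> alive = mark(List.of(a));
            System.out.println(alive.contains(b)); // true
            System.out.println(alive.contains(c)); // false: would be swept
        }
    }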

The difference between state synchronization and frame synchronization

State synchronization means that in each tick, the state of the game entities (such as position, velocity, and acceleration) is sent to the other clients so that every client stays consistent. It is straightforward to keep clients in sync and to recover from packet loss, but because a large amount of state data has to be transmitted every tick, it consumes relatively more bandwidth.

Frame synchronization (lockstep) means that in each tick, only the players' input commands are broadcast to the other clients, and every client runs the same deterministic simulation on those inputs. Since only a small number of commands are transmitted per tick, bandwidth use is very low, but the simulation must be strictly deterministic on every machine, and the whole game is sensitive to the latency of the slowest client.

Common design patterns

Singleton Pattern
Factory Pattern
Composite Pattern
Proxy Pattern
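
As one concrete example of the patterns listed above, a minimal sketch of a thread-safe lazy singleton in Java (the class name is made up for illustration):

    // Thread-safe lazy singleton using the initialization-on-demand holder idiom.
    public class GameConfig {
        private GameConfig() { }               // private constructor: no outside instantiation

        private static class Holder {
            // The JVM guarantees this is initialized lazily and exactly once.
            static final GameConfig INSTANCE = new GameConfig();
        }

        public static GameConfig getInstance() {
            return Holder.INSTANCE;
        }
    }

The holder idiom gets both lazy initialization and thread safety from the class loader, so no explicit locking or volatile double-checked locking is needed.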

Characteristics of linked lists, binary trees, and hash tables

1. Linked list:
  • It is a linear structure in which each node holds a pointer (reference) to the next node, and the nodes linked together form the list;
  • Inserting or deleting a node at a known position only requires re-linking pointers, so it takes O(1) time;
  • Finding a node takes O(n) time, because the search must proceed sequentially from the head node;
  • Nodes are allocated dynamically, so a linked list does not need a contiguous block of memory and its size can grow and shrink flexibly.
2. Binary Tree:
  • A binary tree is a tree structure in which each node has at most two children;
  • For a balanced binary search tree, search, insertion, and deletion all take O(log n) time on average; in the worst (fully skewed) case they degrade to O(n);
  • Nodes are allocated individually and linked by pointers, so the tree does not need contiguous storage and can grow dynamically as data is added.
3. Hash table:
  • A hash table is a data structure that maps a key to a location in a table to access records, speeding up lookups;
  • Search, insertion, and deletion in a hash table take O(1) time on average; in the worst case (many keys colliding) they degrade to O(n);
  • The implementation of a hash table requires additional space to store the hash table itself and to resolve hash collision issues.

HashMap underlying principle

HashMap is implemented as an array of buckets, where each bucket is a linked list (or, in Java 8+, a red-black tree). It stores entries based on the key's hashCode: the bucket index is computed from the hash, and keys that collide (hash to the same bucket) are chained in that bucket's linked list. In Java 8, when a bucket's list grows past a threshold (8 by default), it is converted into a red-black tree to improve query efficiency. When the number of entries exceeds capacity × load factor (the default load factor is 0.75), the table is automatically resized to twice its capacity and the entries are rehashed.
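
A hedged sketch of how the bucket index is typically derived from a key's hashCode when the table length is a power of two (this mirrors the idea used in the JDK, not its exact source):

    public class BucketIndexDemo {
        // Spread the high bits of the hash into the low bits, as HashMap does,
        // then mask with (table length - 1); this only works when the length is a power of two.
        static int bucketIndex(Object key, int tableLength) {
            int h = key.hashCode();
            h ^= (h >>> 16);
            return (tableLength - 1) & h;
        }

        public static void main(String[] args) {
            int capacity = 16;                       // HashMap's default initial capacity
            System.out.println(bucketIndex("hp", capacity));
            System.out.println(bucketIndex("mp", capacity));
            // Keys whose indices collide are chained in a linked list; in Java 8+ a bucket's
            // list is converted to a red-black tree once it grows past the treeify threshold (8).
        }
    }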

How to determine if a linked list has a cycle

  1. Use a hash table to traverse each node in the linked list and store the node's address in the hash table. If the current node already exists in the hash table, it means that the linked list has a loop.
  2. Define two pointers, a slow pointer that moves one step at a time and a fast pointer that moves two steps at a time. If the fast pointer ever catches up with the slow pointer, they have met inside a cycle, so the list has a cycle; if the fast pointer reaches the end of the list (null), there is no cycle (see the sketch below).
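
A minimal sketch of the second approach (fast/slow pointers, Floyd's algorithm); the ListNode class is defined here just for the example:

    public class CycleDetection {
        static class ListNode {
            int val;
            ListNode next;
            ListNode(int val) { this.val = val; }
        }

        // Returns true if the list contains a cycle.
        static boolean hasCycle(ListNode head) {
            ListNode slow = head, fast = head;
            while (fast != null && fast.next != null) {
                slow = slow.next;              // moves one step
                fast = fast.next.next;         // moves two steps
                if (slow == fast) return true; // the pointers met inside a cycle
            }
            return false;                      // fast reached the end: no cycle
        }

        public static void main(String[] args) {
            ListNode a = new ListNode(1), b = new ListNode(2), c = new ListNode(3);
            a.next = b; b.next = c; c.next = a;   // create a cycle: c -> a
            System.out.println(hasCycle(a));      // true
        }
    }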

What are the usage scenarios of stacks and queues?

A typical stack scenario is the browser's back and forward functions: the pages the browser has visited can be kept on stacks, so going back and forward is just pushing and popping URLs. A typical queue scenario is task or message processing, such as a message queue or a printer's job queue, where requests are handled in first-in, first-out order.
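
A hedged sketch of the back/forward idea with two stacks (the class and method names are invented for illustration):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Browser history modeled with two stacks: "back" holds pages behind the
    // current one, "forward" holds pages we backed out of.
    public class BrowserHistory {
        private final Deque<String> back = new ArrayDeque<>();
        private final Deque<String> forward = new ArrayDeque<>();
        private String current;

        public void visit(String url) {
            if (current != null) back.push(current);
            current = url;
            forward.clear();          // visiting a new page invalidates the forward history
        }

        public String goBack() {
            if (!back.isEmpty()) {
                forward.push(current);
                current = back.pop();
            }
            return current;
        }

        public String goForward() {
            if (!forward.isEmpty()) {
                back.push(current);
                current = forward.pop();
            }
            return current;
        }

        public static void main(String[] args) {
            BrowserHistory h = new BrowserHistory();
            h.visit("a.com");
            h.visit("b.com");
            System.out.println(h.goBack());    // a.com
            System.out.println(h.goForward()); // b.com
        }
    }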

Do you know about the TCP sticky packet problem?

The TCP sticky packet problem comes from the fact that TCP is a byte-stream protocol with no message boundaries: data from several application-level sends may be delivered to the receiver in one read, or a single send may be split across several reads, so the receiver cannot tell by itself where one message ends and the next begins.

Ways to solve the TCP sticky packet problem are:
  1. Add a delimiter at the sending end: the sender appends a special delimiter to each message, and the receiver splits the incoming byte stream at the delimiters.
  2. Add a header at the sending end: the sender prefixes each message with a header containing the message length, and the receiver uses that length to cut complete messages out of the stream (see the sketch after this list).
  3. Buffer on both ends: the sender queues data in a buffer and sends it in chunks, while the receiver accumulates incoming bytes in its own buffer and uses the known total length to decide when a complete message has arrived.
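
A minimal sketch of approach 2 (a length header): the sender writes a 4-byte length before each message, and the receiver first reads the length and then exactly that many bytes. The stream setup in main() is only an in-memory stand-in for a socket:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public class LengthPrefixedFraming {
        // Sender: prefix each message with its length so the receiver can split the byte stream.
        static void sendMessage(DataOutputStream out, String message) throws IOException {
            byte[] payload = message.getBytes(StandardCharsets.UTF_8);
            out.writeInt(payload.length); // 4-byte length header
            out.write(payload);
            out.flush();
        }

        // Receiver: read the header first, then exactly that many bytes.
        static String receiveMessage(DataInputStream in) throws IOException {
            int length = in.readInt();
            byte[] payload = new byte[length];
            in.readFully(payload);        // blocks until the whole message has arrived
            return new String(payload, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            sendMessage(out, "hello");
            sendMessage(out, "world");
            // Both messages arrive as one undivided byte stream, yet are split correctly.
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            System.out.println(receiveMessage(in)); // hello
            System.out.println(receiveMessage(in)); // world
        }
    }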

How to implement a simple TCP using UDP

First, UDP datagrams can be used to imitate the three-way handshake of the TCP protocol. In the first handshake, the client sends a UDP datagram containing a connection request (the analogue of TCP's SYN), typically carrying information such as the client's address, port, and an initial sequence number so the server can identify it. In the second handshake, the server replies with a confirmation datagram (the analogue of SYN-ACK), indicating that it has received the request and is ready to provide service. In the third handshake, the client sends a final acknowledgment (the analogue of ACK); after that, both sides consider the connection established and the client can start sending data.

Second, UDP datagrams can carry the data-transfer phase. When the client needs to send data to the server, it encapsulates the data in UDP datagrams and sends them; the server parses each datagram it receives and processes the payload. To approach TCP's reliability, the application layer must also add sequence numbers, acknowledgments, and retransmission on timeout, because UDP by itself guarantees neither delivery nor ordering.

Finally, UDP datagrams can also help implement the end-of-connection process in the TCP/IP protocol. When the client no longer needs to communicate with the server, it can send a UDP datagram to indicate that the client wants to end the connection. After the server receives this message, it will release the corresponding resources, thus completing the entire TCP/IP protocol connection process.
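
A toy sketch of the handshake idea over UDP on localhost (the message strings, port number, and single-process structure are all illustrative assumptions; real reliability would additionally need sequence numbers, acknowledgments, and retransmission):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class UdpHandshakeDemo {
        static final int PORT = 9999; // illustrative port

        public static void main(String[] args) throws Exception {
            // "Server" side runs in a background thread for the demo.
            Thread server = new Thread(() -> {
                try (DatagramSocket socket = new DatagramSocket(PORT)) {
                    DatagramPacket syn = receive(socket);                     // 1st handshake: SYN
                    send(socket, "SYN-ACK", syn.getAddress(), syn.getPort()); // 2nd: SYN-ACK
                    DatagramPacket ack = receive(socket);                     // 3rd handshake: ACK
                    System.out.println("server: connection established, got " + text(ack));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            server.start();

            try (DatagramSocket client = new DatagramSocket()) {
                InetAddress local = InetAddress.getLoopbackAddress();
                send(client, "SYN", local, PORT);                             // client sends SYN
                DatagramPacket synAck = receive(client);                      // waits for SYN-ACK
                System.out.println("client: got " + text(synAck));
                send(client, "ACK", local, PORT);                             // client confirms
            }
            server.join();
        }

        static void send(DatagramSocket s, String msg, InetAddress addr, int port) throws Exception {
            byte[] data = msg.getBytes(StandardCharsets.UTF_8);
            s.send(new DatagramPacket(data, data.length, addr, port));
        }

        static DatagramPacket receive(DatagramSocket s) throws Exception {
            DatagramPacket p = new DatagramPacket(new byte[1024], 1024);
            s.receive(p);
            return p;
        }

        static String text(DatagramPacket p) {
            return new String(p.getData(), 0, p.getLength(), StandardCharsets.UTF_8);
        }
    }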

Have you used coroutines? Why do you use coroutines? Why do coroutines make things faster?

Coroutines allow a program to switch between tasks at well-defined suspension points instead of blocking while one task waits, which improves overall efficiency and reduces total running time. Because coroutines are scheduled cooperatively in user space, switching between them is far cheaper than an OS thread context switch, and tasks multiplexed on the same thread can share state without heavy locking. For I/O-bound, multi-task workloads this overlapping of waits can significantly improve throughput and make the program run noticeably faster.

Which is faster, traversing an array or a linked list of the same length? Why?

Arrays are faster. Array elements are stored contiguously, so traversal is a sequential walk through memory: the next element's address is obtained by simply advancing an index, and the CPU cache and prefetcher handle this access pattern very well. Linked list nodes are scattered in memory, so each step is a pointer dereference to a non-adjacent address that frequently misses the cache; therefore traversing the array is faster.
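
A rough (not JMH-quality) micro-benchmark sketch that contrasts the two traversals; exact numbers will vary by machine and JVM, but the contiguous array typically wins clearly:

    import java.util.LinkedList;
    import java.util.List;

    public class TraversalBenchmark {
        public static void main(String[] args) {
            int n = 1_000_000;
            int[] array = new int[n];
            List<Integer> linked = new LinkedList<>();
            for (int i = 0; i < n; i++) {
                array[i] = i;
                linked.add(i);
            }

            long start = System.nanoTime();
            long arraySum = 0;
            for (int i = 0; i < n; i++) arraySum += array[i];   // sequential, cache-friendly
            long arrayTime = System.nanoTime() - start;

            start = System.nanoTime();
            long listSum = 0;
            for (int value : linked) listSum += value;          // pointer chasing node by node
            long listTime = System.nanoTime() - start;
            // Note: Integer boxing adds extra overhead on top of the pointer chasing.

            System.out.println("array:  " + arrayTime / 1_000_000.0 + " ms (sum=" + arraySum + ")");
            System.out.println("linked: " + listTime / 1_000_000.0 + " ms (sum=" + listSum + ")");
        }
    }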

Let’s talk about virtual functions. Why do we need virtual functions?

A virtual function is a member function whose call is resolved at runtime rather than at compile time: the implementation actually invoked depends on the dynamic type of the object, typically through a virtual function table (vtable), whereas a normal function call is bound statically by the compiler.
The main purpose of virtual functions is to achieve polymorphism: a base (or abstract) class declares virtual functions, derived classes override them, and the same call site then invokes different behavior depending on the actual object.
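
The question is usually asked about C++, where a member function must be marked virtual to get dynamic dispatch; in Java every non-static, non-final instance method is effectively virtual, so the same idea can be sketched like this (the class names are made up):

    public class VirtualDispatchDemo {
        static abstract class Shape {
            // In C++ this would be declared as `virtual double area()`.
            abstract double area();
        }

        static class Circle extends Shape {
            final double r;
            Circle(double r) { this.r = r; }
            @Override double area() { return Math.PI * r * r; }
        }

        static class Square extends Shape {
            final double side;
            Square(double side) { this.side = side; }
            @Override double area() { return side * side; }
        }

        public static void main(String[] args) {
            // The same call site invokes different implementations depending on
            // the object's runtime type: that is polymorphism via dynamic dispatch.
            Shape[] shapes = { new Circle(1.0), new Square(2.0) };
            for (Shape s : shapes) {
                System.out.println(s.area());
            }
        }
    }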

Can a destructor be a virtual function? Does it have to be a virtual function? Why? What are the problems if it is not a virtual function?

A destructor does not have to be a virtual function, but it should be declared virtual in any class that is meant to be used polymorphically, that is, deleted through a base-class pointer. If the base class destructor is not virtual and a derived object is deleted through a base-class pointer, the derived class's destructor is not invoked (formally the behavior is undefined), so resources owned by the derived class can leak.

Introducing the Rendering Pipeline

The rendering pipeline is a series of steps that transform the game scene data from input information to the image displayed on the screen.

The rendering pipeline process is divided into three main stages: preparation stage, geometry stage, and lighting stage.

In the preparation stage, the game engine loads the scene's models and textures into the graphics processing unit (GPU) and organizes the data for use in the subsequent stages.

During the geometry stage, matrix transformations (model, view, and projection) position the model in three-dimensional space and project it into screen space, producing primitives that can be rasterized into pixels.

In the lighting (shading) stage, the light sources and the lighting model are used to compute the color of each pixel, and the resulting image is finally displayed on the screen.

Talk about the insertion, query, and deletion operations of a binary search tree, and what is the time complexity

  1. Insertion:
  • Time complexity: O(log n) on average, O(n) in the worst (fully skewed) case
  • Steps:
  1. Treat the node to be inserted as a new leaf and start the comparison at the root;
  2. If the value to insert is smaller than the current node's value, move to the current node's left child;
  3. If the value to insert is greater than the current node's value, move to the current node's right child;
  4. If the child position to move into is empty, attach the new node there as the child of the current node;
  5. Otherwise repeat steps 2 to 4 from that child until an empty position is found (see the sketch below).
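
A minimal sketch of insertion and lookup following the steps above (deletion is omitted here for brevity); the class and field names are illustrative:

    public class BinarySearchTree {
        static class Node {
            int value;
            Node left, right;
            Node(int value) { this.value = value; }
        }

        private Node root;

        // Walk down from the root and attach the new node where an empty child slot is found.
        public void insert(int value) {
            root = insert(root, value);
        }

        private Node insert(Node node, int value) {
            if (node == null) return new Node(value); // empty slot found: place the new leaf here
            if (value < node.value)      node.left  = insert(node.left, value);
            else if (value > node.value) node.right = insert(node.right, value);
            return node;                               // equal values are ignored in this sketch
        }

        // Lookup takes O(log n) on average, O(n) in the worst (fully skewed) case.
        public boolean contains(int value) {
            Node cur = root;
            while (cur != null) {
                if (value == cur.value) return true;
                cur = value < cur.value ? cur.left : cur.right;
            }
            return false;
        }

        public static void main(String[] args) {
            BinarySearchTree tree = new BinarySearchTree();
            tree.insert(5); tree.insert(3); tree.insert(8);
            System.out.println(tree.contains(3)); // true
            System.out.println(tree.contains(7)); // false
        }
    }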

What are the conditions for the greedy algorithm to obtain the optimal solution?

The conditions for a greedy algorithm to produce an optimal solution are "optimal substructure" and the "greedy choice property":

  1. Optimal substructure: an optimal solution to the problem contains optimal solutions to its subproblems;
  2. Greedy choice property: a globally optimal solution can be reached by making a locally optimal choice at each step, without reconsidering earlier choices.
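
A classic problem where both conditions hold is activity (interval) selection: sorting by finish time and greedily taking the next compatible activity yields an optimal answer. A minimal sketch (the data is made up):

    import java.util.Arrays;

    public class ActivitySelection {
        // Each activity is {start, finish}. Greedily picking the activity that finishes
        // earliest always leaves the most room for the remaining ones, which is why the
        // greedy choice property holds for this problem.
        static int maxActivities(int[][] activities) {
            Arrays.sort(activities, (a, b) -> Integer.compare(a[1], b[1])); // sort by finish time
            int count = 0;
            int lastFinish = Integer.MIN_VALUE;
            for (int[] activity : activities) {
                if (activity[0] >= lastFinish) { // compatible with the previously chosen activity
                    count++;
                    lastFinish = activity[1];
                }
            }
            return count;
        }

        public static void main(String[] args) {
            int[][] activities = { {1, 4}, {3, 5}, {0, 6}, {5, 7}, {8, 9}, {5, 9} };
            System.out.println(maxActivities(activities)); // 3 -> e.g. {1,4}, {5,7}, {8,9}
        }
    }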