Consumer Repurchase Behaviors of Smartphones

Theoretical Background

Theory of Reasoned Action (TRA)

One of the most important areas of consumer psychology and behavior research is the relationship between consumer attitudes and behaviors. One theory explaining consumer attitudes and intentions to use a product is the theory of reasoned action (TRA). The TRA suggests that consumers carefully consider the consequences of various behaviors before acting. In other words, consumer behavior is under voluntary control; thus, consumer behaviors can be predicted from their intentions. In addition, unlike other models that explain consumer attitudes, the TRA also accounts for subjective norms. Before acting, consumers weigh the costs of performing an action against the benefits that may arise from it, and then choose the action that is the most beneficial or least costly.


Heuristics Theory

Heuristic thinking refers to intuitive thinking grounded in experience, rather than conclusions derived from rational analysis; in other words, it is subject to bias. A heuristic settles for "bounded rationality" rather than pursuing an unattainable full rationality. According to heuristics theory, many consumers make decisions based on habits or beliefs, or by following others' decisions, because these approaches are simpler and avoid complexity.

This study adopted heuristics theory as the basis for studying consumer habit.

Artificial Neural Network (ANN)

An artificial neural network (ANN) can be defined as an array of highly connected basic processors called neurons. As shown in Figure 1, the multilayer perceptron (MLP) is a hierarchical neural network with at least one intermediate (hidden) layer between the input layer and the output layer. The MLP is similar in structure to a single-layer perceptron, but it improves network capability by making the input-output characteristics of the intermediate layer and of each unit nonlinear, thereby overcoming various disadvantages of the single-layer perceptron. In other words, the representational capacity of the MLP increases as the number of layers increases.

Figure 1. Multilayer perceptron structure.

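The layered structure described above can be sketched as a small forward pass. This is an illustrative example only, not the model used in the study: the layer sizes, weights, and the choice of tanh as the nonlinear activation are all assumptions made for demonstration.

```python
import math

def tanh_layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per unit, then a nonlinear (tanh) activation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def mlp_forward(x, hidden_w, hidden_b, out_w, out_b):
    """Input layer -> one hidden (intermediate) layer -> output layer."""
    hidden = tanh_layer(x, hidden_w, hidden_b)   # nonlinear intermediate layer
    return tanh_layer(hidden, out_w, out_b)      # output layer

# Illustrative network: 3 inputs -> 2 hidden units -> 1 output (weights are made up)
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
out_w = [[1.0, -1.0]]
out_b = [0.2]

y = mlp_forward([1.0, 0.5, -1.0], hidden_w, hidden_b, out_w, out_b)
```

Stacking further hidden layers follows the same pattern: each additional layer applies another weighted sum and nonlinear activation to the previous layer's outputs.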

Inputs x_1, x_2, and x_3 have associated weights w_1, w_2, and w_3. The output y of the neuron is calculated as shown in Figure 2. The function f is nonlinear and is called the activation function. The purpose of the activation function is to introduce nonlinearity into the output of a neuron, which is important because most real-world data are nonlinear.

Figure 2. Single neuron.


In mathematical terms, the neuron k depicted in Figure 3 can be described by the following equations:

u_k = \sum_{j=1}^{m} w_{kj} x_j,

y_k = \varphi\left(u_k + b_k\right),

where \varphi(\cdot) is the activation function, u_k is the linear combiner output for the input signals, b_k is the bias, and y_k is the output signal of the neuron.
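The two equations above can be computed directly. In this sketch the activation φ is taken to be the logistic sigmoid, and the input values, weights, and bias are made-up illustrative numbers, not parameters from the study.

```python
import math

def neuron_output(x, w, b):
    """Compute y_k = phi(u_k + b_k) for a single neuron k."""
    # u_k = sum_j w_kj * x_j  (linear combiner output)
    u = sum(w_j * x_j for w_j, x_j in zip(w, x))
    # phi: logistic sigmoid, chosen here for illustration
    return 1.0 / (1.0 + math.exp(-(u + b)))

x = [1.0, 0.5, -1.0]   # input signals x_j
w = [0.4, -0.2, 0.1]   # weights w_kj
b = 0.05               # bias b_k
y = neuron_output(x, w, b)
```

Here u_k = 0.4 - 0.1 - 0.1 = 0.2, so the neuron outputs the sigmoid of 0.25.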

Figure 3. Research model.
