Abstract
Background
Agent negotiation is widely used in e-commerce, cloud service service-level agreements, and power transactions. However, few studies have adapted agent negotiation models to the negotiation process between healthcare professionals and patients, owing to the fuzziness, ethical sensitivity, and importance of medical decision-making.
Method
We propose a Bayesian learning-based bilateral fuzzy constraint agent negotiation model (BLFCAN). It supports mutually beneficial agreements on treatment between doctors and patients. The proposed model expresses the imprecise preferences and behaviors of doctors and patients through fuzzy constrained agents. To improve negotiation efficiency and social welfare, the Bayesian learning method is adopted in the proposed model to predict the opponent's preferences.
Results
In terms of individual satisfaction, the proposed model achieves 55.4–64.2% satisfaction for doctors and 69–74.5% satisfaction for patients. In addition, the proposed BLFCAN increases overall satisfaction by 26.5–29% in fewer rounds, and it can flexibly alter the negotiation strategy for various negotiation scenarios.
Conclusions
BLFCAN reduces communication time and cost, helps avoid potential conflicts, and reduces the impact of emotions and biases on decision-making. In addition, the BLFCAN model improves the agreement satisfaction of both parties and the total social welfare.
Introduction
Shared decision-making (SDM) models actively support healthcare providers and patients regarding information sharing, negotiation, and decision-making activities [1, 2]. In contrast to traditional paternalistic decision-making and informed decision-making processes, SDM is an ideal decision-making model in the medical context because the patients’ right to personal health management and autonomy is respected and realized. For example, when a patient with type 2 diabetes consults a doctor about treatment options, the doctor explains the pros and cons of medication and lifestyle interventions and invites the patient to share their perspective. The patient expresses a preference to minimize medication reliance and favors lifestyle adjustments. In response, the doctor suggests starting with a low-dose medication alongside lifestyle modifications, with regular follow-ups to adjust the plan as needed. Through such communication and collaboration, SDM enables doctors and patients to create a personalized treatment plan that aligns with medical standards and the patient’s lifestyle, giving the patient greater involvement and autonomy in the decision-making process.
Many previous studies have reported the benefits of SDM. For example, Kioko et al. [3] stated that SDM coincides with Pellegrino and Thomasma's proximate end of medicine, i.e., a technically correct and morally good healing decision made for and with a particular patient. In addition, Drake et al. [4] described the potential benefits of implementing SDM, including ethics, quality, informed decisions, patient satisfaction, enhanced realization of patient self-management, improved adherence, and meaningful outcomes. Overall, SDM processes can improve the increasingly tense doctor-patient relationship.
Despite the diverse prospects for SDM implementation, many barriers remain, including the need for new educational structures, improved communication approaches, and decision-making models tailored to personalized medicine [4]. Professional education is already well established in medical schools, but SDM requires educational structures that integrate both professional and humanistic competencies for healthcare practitioners. Professional skills include knowledge of diseases and treatments, while humanistic skills focus on communication, empathy, and patient-centered interaction. These competencies empower healthcare providers to build effective SDM-related skills. Improved communication approaches are also essential to ensure clear, two-way understanding between doctors and patients, reducing misunderstandings and facilitating informed choices. Finally, well-designed decision-making models provide structured frameworks that integrate medical evidence with the values and preferences of doctors and patients.
In addition to these needs, other obstacles to effective SDM include insufficient clinical time [5, 6], significant asymmetry in medical information and misinformation [7, 8], limited communication skills, and the influence of personal emotions and biases [9, 10]. These factors can complicate SDM by limiting the quality and clarity of interactions between doctors and patients.
To address these barriers, current SDM approaches involve shared decision-making programs (SDP) [11], patient decision aids (PtDA) [12, 13], SDM skills training [14], and internet-based health information resources [15]. For instance, SDPs are interactive video programs that use electronic devices to guide patients through treatment choices; however, the high production costs limit their use [16]. PtDAs inform patients about treatment options and associated risks and benefits, though their effectiveness depends on information accuracy and adaptability to diverse literacy levels and cultural backgrounds.
Building on these traditional methods, our study proposes a Bayesian Learning-Based Agent Negotiation Model, which offers several advantages for SDM. Conventional SDM models primarily focus on information sharing and patient education, often relying on subjective judgments from physicians. This can make it challenging to efficiently reach personalized treatment plans in complex cases. In contrast, our model facilitates dynamic, autonomous negotiation by adapting in real-time to each agent’s evolving preferences and behaviors. Bayesian learning enables flexible updates without requiring prior data, while the integration of fuzzy numbers enhances preference quantification, making decision-making more robust and broadly applicable.
To further address SDM challenges, we conceptualize the SDM problem as a bilateral treatment selection issue characterized by three key features: (1) only one doctor and one patient are involved in each negotiation, each with unique beliefs and preferences; (2) the problem is treated as a fuzzy constraint decision-making process due to the ambiguity of decision criteria; and (3) it is a dynamic process where both parties influence the final treatment plan through interaction. We adopt an agent system for these interactions, as agent negotiation provides a flexible framework for establishing computational models of autonomous entities to reach agreements [17]. Bayesian learning allows agents to better understand each other’s preferences, thereby enhancing negotiation outcomes through mutual learning.
The primary contributions of this study are summarized as follows.
(1) We propose a Bayesian learning-based agent negotiation model to implement SDM. The proposed model creatively combines advanced SDM principles with agent negotiation technology.
(2) We construct autonomous agents according to the behaviors of doctors and patients, including how to evaluate a proposal, how to offer a proposal, and how to reach agreement. In addition, we introduce a gradient fuzzy number to represent the imprecise preferences of doctors and patients, which allows us to better model the uncertain and subjective preferences expressed by both doctors and patients when describing medical problems [18, 19].
(3) We employ Bayesian learning techniques to learn the opponent's fuzzy membership function via a probabilistic approach, i.e., to learn the opponent's preferences [20,21,22]. Note that this technique does not initially require any information about the opponent or the negotiation history to simulate doctor and patient learning or evaluate the opponent's preferences.
The remainder of this paper is organized as follows. “Literature review” section introduces the related work. “Negotiation model for SDM” section details the SDM negotiation model and gives a case study to explain it. “Performance evaluation” section evaluates and discusses our proposed model from different perspectives. “Conclusion” section concludes the paper.
Literature review
Shared decision-making
The concept of shared decision-making was advocated around the 1970s from an ethical and clinical point of view to ensure that the medical condition or procedure itself presents no moral problem. Since then, early research on shared decision-making (SDM) has focused primarily on theoretical foundations, e.g., the definition of SDM, its essential elements, and SDM models. This theoretical work emphasized patient autonomy and the need for a collaborative approach in healthcare, where patients' preferences and values are incorporated into treatment decisions alongside providers' recommendations, fostering a more patient-centered model of care [2, 23].
In recent years, SDM research has increasingly shifted towards practical applications, particularly in clinical fields where treatment decisions are complex and highly individualized, such as chronic disease management, mental health, and oncology [15]. These studies demonstrate the value of SDM in enabling patients to actively participate in selecting long-term treatment options that align with their personal circumstances, leading to improved adherence and patient satisfaction.
To support SDM in practice, tools like Shared Decision-Making Programs (SDP) and Patient Decision Aids (PtDA) have been developed to help patients understand treatment options and the associated risks and benefits.
Additionally, assessment tools have been introduced to measure and improve the SDM process, facilitating meaningful provider-patient interactions and supporting the model’s practical implementation. For example, there are tools available to measure patient involvement in the decision-making process based on the perspectives of both the patient (SDM-Q-9) and the doctor (SDM-Q-Doc), and these tools are used to assess and improve healthcare [24]. Other related tools include the dyadic OPTION instrument, the SDM Scale, DSAT, DSAT-10, and DAS-O [25]. However, SDM must extend beyond simply using tools to provide patients with relevant information, i.e., SDM requires meaningful and mutual engagement and negotiation between doctors and patients [26].
Agent technology
Agents are software entities that simulate human behavior, and they are employed in multiagent systems (MASs) to study the informational and dynamic behaviors of complex systems. Agents possess three basic abilities, i.e., reactivity, proactivity, and social ability [27]. In addition, agents possess human characteristics, e.g., knowledge, beliefs, desires, and plans. Given the capabilities and characteristics of agents, the most widely used agent architecture is the belief-desire-intention (BDI) software model. Fig. 1 illustrates the generic BDI model, and concepts related to agent construction are defined in Table 1 [28]. An agent can simulate the behaviors of negotiation participants and perform negotiations according to relevant protocols and frameworks to automate the negotiation process.
Agent negotiation
In a multi-agent system, negotiation between people evolves into negotiation between agents. Accordingly, how to conduct reasonable and effective negotiations among multiple agents has become an important problem, and agent negotiation is the key link in multi-agent cooperation. With the development of internet and telecommunication technology, more terminal systems can be treated as autonomous agents that are not dominated by any central node. For example, in an e-commerce system, buyers and sellers are autonomous agents [29]. Similarly, doctors and patients can be treated as autonomous agents in internet-based medicine. In such systems, negotiation is the predominant form of interaction, and autonomous agents are increasingly required to operate in open and distributed systems comprising multiple problem solvers with competing objectives [30].
Studies in the fields of economics and artificial intelligence have constructed the theoretical basis of autonomous negotiation. In agent negotiation, finding better offers and counteroffers that meet users' requirements or maximize their benefits is the most important behavior. Common approaches to this problem include game theory, heuristic methods, and argumentation-based negotiation. Computers make the notion of strategy concrete through behavior programming, and strategy plays the same central role in game theory; accordingly, Rosenschein first applied game theory to agent negotiation problems [31]. Kraus et al. [32] integrated game theory, economic techniques, and heuristic methods into the agent negotiation process. Monteserin and Amandi integrated argumentation-based negotiation planning into the general planning process. In argumentation-based negotiation, agents are allowed to exchange additional information as arguments without further cost, beyond the information provided in the proposals [33].
In most situations, agent negotiation has been modeled as a dynamic and distributed decision-making problem; therefore, agent negotiation technology has developed alongside multi-attribute decision-making technologies. Jonker et al. [34] proposed a component-based generic agent architecture for multi-attribute (integrated) negotiation. The framework uses a distributed design in which each agent uses the available information about the opponent to predict the opponent's preferences, introducing a "guessing" heuristic to further improve the negotiation results. Hsu et al. [35] proposed an agent-based fuzzy constraint-directed negotiation mechanism for the distributed job shop scheduling problem, in which the concept of the fuzzy membership function is introduced to represent the imprecise preferences of job and resource agents regarding task start times. This increased information sharing accelerates convergence and enables global consistency through the iterative exchange of offers and counteroffers. These models are essential in improving agent negotiation efficiency and effectiveness.
Agent negotiation has been widely used in service-level agreements (SLAs) [36, 37] and e-commerce. In terms of research content, studies on agent negotiation mainly focus on negotiation frameworks, negotiation or conflict resolution models, and negotiation strategies that seek a satisfactory solution. For example, Rajavel et al. [38] proposed an automated dynamic negotiation framework (ADSLANF) for SLAs that introduces a bulk negotiation behavioral learning approach based on reinforcement learning techniques to optimize the negotiation strategy. Cao et al. [39] developed a theoretical model and algorithm for multi-strategy selection based on time-dependent and behavior-dependent strategies applied to e-commerce, which is critical for human-machine negotiation to achieve better online negotiation results. Li et al. [40] proposed a genetic algorithm-based negotiation strategy that uses a genetic algorithm to investigate the preferences and utility functions of the adversaries to achieve a win-win situation for the customer and supplier under incomplete information. Additionally, negotiation or conflict resolution models have been extensively studied in decision analysis. For example, Xiao et al. [41] proposed an optimization-based consensus model with bounded confidence, and Zhang et al. [42] proposed a social trust-driven consensus-reaching model. These methods ensure satisfaction and success rates through negotiation strategies and resolve conflicts through behavioral learning, modified assessments, or concessions. Furthermore, they increase the utility value and success rate of the negotiating parties and optimize performance in terms of negotiation rounds, total negotiation time, and communication overhead. However, there is room for improvement in these methods in terms of finding optimal solutions and prioritizing feasible ones.
With the development of machine-learning technology, some researchers have tried to make the agent learn from the opponent to improve the negotiation efficiency and convergence speed [43, 44]. Currently, the main techniques for learning opponent information include Bayesian learning [20,21,22], nonlinear regression [45], kernel density estimation [46], and artificial neural networks [47]. Among them, Bayesian learning and nonlinear regression are usually applied as online learning techniques. They do not require a training phase to produce reasonable estimates, and their estimates can be improved incrementally during negotiation. In contrast, kernel density estimation and artificial neural networks, which typically require a training phase, are usually applied where negotiation history exists.
However, none of the above methods can be applied directly to SDM because, in SDM, the preferences of both parties are ambiguous and uncertain, and there are connections and constraints between the two agents and among the negotiation issues. Currently, agents are used in SDM mainly as cognitive agents that negotiate in place of patients to train human trainees (doctors); the aim is to enable doctors to learn to analyze the consequences of their own and each other's actions, make decisions, and improve their SDM skills [48]. However, no studies in the literature provide solution recommendations for supporting SDM. Based on this consideration, we study a generic negotiation framework applicable to SDM and provide suggested solutions for reference when reaching an agreement. Given that SDM may not have a negotiation history, we chose Bayesian learning technology to learn the opponent preference model.
Negotiation model for SDM
Here, we introduce the SDM scenario and problem formulation ("SDM scenario and problem formulation" section). Then, in "Construction of opponent preference model" section, we describe how Bayesian learning techniques are employed to model the opponent's preferences and improve both agents' knowledge of the opponent during the negotiation process. The opponent preference model makes good use of limited information and improves the quality and efficiency of the resulting decisions. Finally, the negotiation model for SDM is described in "Negotiation model" section.
SDM scenario and problem formulation
SDM scenario
In an actual clinical SDM setting, the doctor and patient are both actively involved in the shared decision-making process, including problem definition, presentation of options, discussion of pros and cons, expression of preferences and values at an appropriate level, and choice making. Much information is shared during this process. However, still more cannot be shared or cannot be expressed accurately between doctors and patients due to realistic limitations on both sides, e.g., time, medical literacy, privacy considerations, and sometimes even conflicting interests. Thus, we assume that SDM negotiation occurs in an incomplete and fuzzy information environment; in brief, both parties lack complete information about their opponents and possess imprecise knowledge.
An appropriate SDM scenario exists when a doctor and a patient meet to select a specific treatment following the SDM process. In this case, there are two independent participants, i.e., the doctor and the patient. The doctor and patient possess their own beliefs, desires, and intentions [28] (as in the agent BDI model) on multiple issues, and they have sufficient autonomy to communicate with others and manage their own behaviors. In a clinical scenario, these issues may involve cost, side effects, recovery period, treatment effectiveness, and so on [49]. In addition, they are subject to the influence and constraints of other participants and the environment. On this basis, both parties intend to reach an agreement on the treatment plan with as high a satisfaction level as possible.
Thus, an agent negotiation framework is built, as shown in Fig. 2. Participants in SDM are modeled as independent, interrelated, and environmentally constrained agents. Here, the communication between the doctor and patient is considered a negotiation between agents. In addition, the varied beliefs and desires are expressed as independent and unique preference functions over multiple issues, while the aggregate satisfaction with treatment options is considered the individual utility. To support SDM activities, a number of agent activities are built into the agents later. As a result, the doctor agent (DA) and patient agent (PA) can support the doctors' and patients' medical decision-making processes. Uncertainty and imprecise information exist in this scenario; as shown in Fig. 3, we express this inaccurate information and the constraints using trapezoidal fuzzy numbers.
SDM problem definition
The negotiation model of SDM can be abstracted as a triplet (D, P, Q), where D is the DA, P is the PA, and Q is the set of constraints on the two agents. For example, the available treatment options and their costs are constrained by medical technology, which is a typical environmental constraint. Due to the fuzziness of medical information, the SDM problem is further expressed as a fuzzy constraint satisfaction problem (FCSP) for each agent.
In the FCSP in SDM, the constraint relations between agents determine whether there exists a solution that satisfies all constraints of the FCSP. Thus, the goal of each agent is to identify behavior that satisfies its fuzzy constraints. In addition, the FCSP can be solved by negotiation between the DA and PA, and it is represented by distributed fuzzy constraint networks (DFCN). A DFCN can be satisfied through the assignment of the fuzzy relationship between agents [50]. In this study, the DFCN is defined as follows.
Definition 1. An agent-based multi-agent, multi-issue shared decision-making problem can be modeled as a distributed fuzzy constraint network (DFCN), which is defined as a pair of fuzzy constraint networks (FCNs) \(N_l=\left( U_l,X_l,C_l\right)\) for agent \(l \in \{DA,PA\}\), where:
1. \(U_l\) is the universe of discourse for agent l.
2. \(X_l\) is the tuple of non-recurring objects for agent l.
3. \(C_l\) is a set of fuzzy constraints involving both a set of internal fuzzy constraints among the objects in \(X_l\) and a set of external fuzzy constraints between agent l and its opponent agents.
4. \(N_l\) is connected to other \(N_j\) by a set of external fuzzy constraints.
5. U is the universe of discourse for the whole DFCN.
6. \(X=\left( \bigcup_{i=1}^nX_i\right)\) is the tuple of all non-recurring objects of the agents in the DFCN.
7. \(C=\left( \bigcup_{i=1}^nC_i\right)\) is the set of all fuzzy constraints in the DFCN.
Different DAs and PAs may have different negotiation objectives, resulting in different negotiation items; the framework can handle situations where the negotiation items differ.
An illustration of Definition 1, based on a scenario of childhood asthma treatment negotiation, is presented here. Basic information on childhood asthma treatments has been collected and calculated from experienced doctors and real data analysis, as shown in Table 2. It presents the characteristics of five different types of treatment options, including their cost range, effectiveness range, side effects range, risk index range, and convenience index range. It’s important to note that this data is processed only for explanation and experimental purposes and is not suitable for direct clinical use.
According to Definition 1, the following is an explanation of the variables. The preferences and constraints of the DA and PA are represented by the non-recurring object sets \(X_{DA}\) and \(X_{PA}\), where \(X_{DA}\) and \(X_{PA}\) include cost, effectiveness, side effects, risk, and convenience.
The fuzzy constraint sets \(C_{DA}\) and \(C_{PA}\) represent the expectations and limitations of the DA and PA regarding treatment options. The sets of solutions proposed by the doctor and patient, \(\mathrm {\Pi }_{DA}\) and \(\mathrm {\Pi }_{PA}\), include treatment plans that satisfy their respective fuzzy constraints. The problem set Q encompasses all factors to be considered, such as cost, effectiveness, side effects, risk, and convenience. A feasible solution S is a treatment plan that meets all fuzzy constraints, and the aggregated satisfaction value \(\mathrm {\Psi }_l(S)\) is calculated through a weighted sum formula, representing each agent’s overall satisfaction with the solution.
In practical application, the doctor and patient negotiate based on the treatment options provided in Table 2. For example, the doctor might prefer the "\({En-High Dose ICS/LABA + LTRA}^a\)" option for its strong performance in treatment effectiveness and risk management, while the patient might lean towards the "\({En-High Dose ICS + Sustained-Release THP}^d\)" option, which has advantages in cost and convenience. By taking into account both parties’ preferences and constraints, they may agree on a compromise solution, such as the "\({En-High Dose ICS/LABA + Sustained-Release THP}\)" plan. This option balances treatment effectiveness and risk control with cost and convenience, maximizing satisfaction for both parties.
The aggregated satisfaction value of solution S for the \({l}^{th}\) agent, i.e., \(\mathrm {\Psi }_l(S)\), is defined as follows.
Here, \(M_{l,i}\left( S\right)\) represents the membership degree of solution S for the i-th issue. This value is derived directly from the responses of doctors and patients, using customized fuzzy membership functions for each issue. These functions are created through expert consultation or by interviewing doctors and patients to capture their preferences. For each issue - such as treatment effectiveness, side effects, or cost - a trapezoidal fuzzy membership function is defined based on acceptable value ranges provided by the participants. In addition, n is the number of issues to be negotiated by the DA and PA, and \(w_{l,i}\) is the corresponding weight factor for the i-th issue for the l-th agent. This approach enables an accurate representation of satisfaction levels for each potential solution, ensuring that the aggregate satisfaction function \(\Psi _l(S)\) effectively models the preferences of both agents in the decision-making process.
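To make this calculation concrete, the sketch below implements a trapezoidal fuzzy membership function and the weighted aggregation described above. The breakpoints, issues, and weights are illustrative assumptions, not the values elicited from doctors and patients in this study.

```python
# Sketch of the aggregated satisfaction value (ASV) computation described above.
# The trapezoidal breakpoints and weights below are illustrative assumptions,
# not the elicited values used in the paper.

def trapezoidal(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def aggregated_satisfaction(solution: dict, membership: dict, weights: dict) -> float:
    """Weighted sum of per-issue membership degrees (weights assumed to sum to 1)."""
    return sum(weights[i] * membership[i](solution[i]) for i in solution)

# Hypothetical patient-agent preferences over two issues (cost in kRMB, effectiveness score).
pa_membership = {
    "cost": lambda x: trapezoidal(x, 0.0, 0.5, 3.0, 6.0),            # cheaper is better
    "effectiveness": lambda x: trapezoidal(x, 5.0, 8.0, 10.0, 10.5),  # higher is better
}
pa_weights = {"cost": 0.6, "effectiveness": 0.4}

offer = {"cost": 4.5, "effectiveness": 7.0}
print(aggregated_satisfaction(offer, pa_membership, pa_weights))  # ~0.57
```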
Construction of opponent preference model
To improve the quality and efficiency of healthcare decisions and make good use of limited information (e.g., the opponent’s counteroffer), we employ Bayesian learning techniques to update the agent’s knowledge about the opponent’s preferences.
Bayesian learning was chosen for its adaptability and precision in managing the uncertainties inherent in SDM, which prior approaches have struggled to address. Traditional methods, like rule-based and deterministic models, often cannot dynamically adjust to the evolving, personalized preferences that emerge in patient-physician interactions. In contrast, Bayesian learning offers a probabilistic framework that continuously updates its understanding of both parties’ preferences in real time, adapting as new information is acquired. This capacity to handle uncertain, shifting preferences makes Bayesian learning particularly effective in achieving personalized and mutually satisfactory treatment plans within the complex and variable context of SDM.
In the following, we define the weight hypothesis and fuzzy membership function hypothesis for each issue of the opponent. Then, we apply Bayesian rules to update the probabilities of these hypotheses during the negotiation to understand the opponent’s preference information.
Hypothesis setting
In SDM, we use the aggregated satisfaction value \(\mathrm {\Psi }\left( b_t\right)\) to measure the agent's satisfaction with the content of the negotiation. Thus, we employ the aggregate satisfaction function to measure the opponent's satisfaction with an offer. This function is defined by a set of weights \(w_i\) for each of the n issues and the corresponding fuzzy membership functions \(f_i(x_i)\).
Here, \(x_i\) represents the value of the i-th issue in bid \(b_t\) at negotiation time t. To ensure that the aggregated satisfaction function \(\mathrm {\Psi }\left( b_t\right)\) lies in the range [0,1], the fuzzy membership function \(f_i\) is assumed to take values in [0,1], and the weights \(w_i\) are normalized so that their sum equals 1.
To learn the opponent's aggregated satisfaction function \(\mathrm {\Psi }\left( b_t\right)\), we must learn the opponent's issue weights \(w_i\) and fuzzy membership functions \(f_i(x_i)\); thus, we form hypotheses about \(w_i\) and \(f_i(x_i)\) in Eq. (2), respectively. The first type of hypothesis concerns the issue weights \(w_i\): we first define the space \(H^w\) of all possible weight hypotheses and then compute a real-valued weight for each weight hypothesis \(h_j^w\in H^w\) using the following linear function. Specifically, Eq. (3) describes the relationship between the agent's assumed ranking of issues and the issue weights, as follows:
Here, \(r_i^j\) is the ranking order of weights \(w_i\) in hypothesis \(h_j\), and n is the total number of negotiation issues. In a multi-issue negotiation model, the weights of issues are crucial to the agent’s satisfaction. The logic of Eq. (3) lies in gradually updating the hypothesis of the opponent’s weights based on this ranking assumption, and dynamically adjusting through Bayesian learning to adapt to the opponent’s negotiation changes. This helps construct a preference model of the opponent agent.
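Eq. (3) itself is not reproduced above. To illustrate how a hypothesized ranking can be mapped to normalized weights by a linear rule, the sketch below uses rank-sum weights; this is an assumption standing in for the paper's exact formula and only shows the shape of the weight-hypothesis space.

```python
# Illustrative stand-in for Eq. (3): map a hypothesized ranking of issues to
# normalized weights with a linear (rank-sum) rule. The exact formula in the
# paper may differ; this only sketches how a ranking hypothesis becomes weights.

def rank_sum_weights(ranking: list[str]) -> dict[str, float]:
    """ranking[0] is the most important issue; weights decrease linearly and sum to 1."""
    n = len(ranking)
    total = n * (n + 1) / 2
    return {issue: (n - pos) / total for pos, issue in enumerate(ranking)}

# One weight hypothesis h_j: the opponent cares most about effectiveness, least about cost.
h_j = rank_sum_weights(["effectiveness", "side_effects", "risk", "convenience", "cost"])
print(h_j)  # e.g., effectiveness ~0.33, side_effects ~0.27, ..., cost ~0.07
```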
The second assumption involves the hypothesis about the opponent’s fuzzy membership function. Here, the preferences of the DA and PA can be considered as a membership function; thus, we can assign a membership degree to each hypothesis in the hypothesis space and model the fuzzy membership function as a probability distribution. As shown in Figs. 4 and 5, we approximate the shape of the true fuzzy membership function for negotiation issue i by associating the various fuzzy membership function hypotheses with their corresponding probabilities (\({\bar{h}}_i^f\)).
Typically, both agents must make more or fewer concessions to reach an agreement during the negotiation process. Thus, we assume that the agents employ a concession-based time-dependent strategy [51]. According to this strategy, the agents begin with a bid with the highest aggregated satisfaction value and move towards their reservation value as they approach the negotiation deadline. Here, we employ a monotonically decreasing linear function to statically estimate the satisfaction value of the counteroffer, \(\hat{\Psi}^{\prime}\left( b_t\right) = 1-0.05\cdot t\), and compute the conditional probability \(P\left( b_t\mid h_j\right)\) as follows.
Here, \(\Psi \left( b_t\mid h_j\right)\) is the satisfaction value of the opponent's bid \(b_t\) given hypothesis \(h_j\), and \(\Psi ^{\prime }\left( b_t\right)\) is the dynamically estimated satisfaction value of the opponent's next bid. Note that function c(t) is the concession strategy assumed to be used by the opponent.
The expected value of the shape of the fuzzy membership function \(\bar{h}_i^f\) is calculated as follows.
Here, \(h_{i,j}^f\) is the fuzzy membership function for issue i under hypothesis j, where \(P\left(h_{i,j}^f\right)\) represents the probability of hypothesis j. The term \(h_{i,j}^f(b_t)\) generates the satisfaction values for the bids \(b_t\), and the summation considers all m possible hypotheses. Eq. (6) therefore calculates the expected value of the fuzzy membership function for issue i at bid \(b_t\).
We use \(h_{i,j}^w\) to denote the hypothesis about the weight of issue i according to hypothesis j, with associated weight values such as \(h_{1,1}^w=0,h_{1,2}^w=0.1,h_{1,3}^w=0.2,\ldots\). The expected value of the weights is calculated as follows.
Finally, the expected aggregated satisfaction of the opponent for bid \(b_t\) is calculated as follows:
Note that, for each issue, the weights and the probability distribution over the evaluation functions must be normalized.
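Putting the hypothesis machinery of this subsection together, the sketch below computes the expected membership value and expected weight for one issue as probability-weighted averages over the hypotheses, as described for Eqs. (6) and (7). The hypothesis shapes, weight values, and probabilities are made up for illustration.

```python
# Sketch of the expected-value calculations in this subsection: the expected
# membership value and expected weight for one issue are probability-weighted
# averages over the hypotheses. All numbers below are illustrative.

# Membership-shape hypotheses for issue i: each maps an issue value to [0, 1].
shape_hypotheses = [
    lambda x: max(0.0, 1.0 - x / 4.0),            # prefers low values
    lambda x: min(1.0, x / 8.0),                  # prefers high values
    lambda x: 1.0 if 2.0 <= x <= 6.0 else 0.0,    # prefers a middle band
]
p_shape = [0.5, 0.3, 0.2]               # P(h^f_{i,j}), must sum to 1

# Weight hypotheses for issue i and their probabilities.
weight_hypotheses = [0.0, 0.1, 0.2, 0.3]
p_weight = [0.1, 0.4, 0.3, 0.2]         # P(h^w_{i,j}), must sum to 1

def expected_membership(x_i: float) -> float:
    """Expected fuzzy membership value for issue value x_i (role of Eq. (6))."""
    return sum(p * h(x_i) for p, h in zip(p_shape, shape_hypotheses))

def expected_weight() -> float:
    """Expected weight of the issue (role of Eq. (7))."""
    return sum(p * w for p, w in zip(p_weight, weight_hypotheses))

print(expected_membership(3.0), expected_weight())
```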
Updating the hypothetical probabilities
The opponent's initial bid corresponds to their maximum aggregated satisfaction; thus, this bid provides no information about the issue weights. Therefore, the first bid only updates the probability distribution over the hypotheses about the shape of the opponent's fuzzy membership functions. The probability distribution over the weight hypotheses is updated only after the agent receives more than one bid from the opponent.
With this in mind, we can update the hypotheses for issue k using the expected values of the weight hypotheses for the remaining issues defined by the opponent model and the expected values of the shape hypotheses of the fuzzy membership functions. Here, we must introduce the partial expected utility of bid \(b_t\), denoted \({\bar{u}}_{<-k>}\left( b_t\right)\), which is defined as follows.
We then update the probabilities of the hypotheses on the evaluation function shapes according to the Bayesian rule as follows.
Here, \(\bar{h}_i^w\) is the expected value of the weights of issue k, and \(j=1, \ldots , m\).
The probabilities of the hypotheses associated with the weights of issue k can be updated as follows.
Here, \(j=1, \ldots , m\).
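A minimal sketch of the Bayesian update described in this subsection is given below. Because the exact likelihood form is not reproduced here, the code assumes a Gaussian kernel around the statically estimated opponent satisfaction \(1-0.05\cdot t\); only its role is taken from the text, and the hypothesis values are illustrative.

```python
import math

# Sketch of the Bayesian hypothesis update. The Gaussian likelihood kernel is an
# assumption standing in for the paper's conditional probability: hypotheses whose
# predicted satisfaction is close to the time-based estimate (1 - 0.05*t) gain mass.

def likelihood(predicted_satisfaction: float, t: int, sigma: float = 0.1) -> float:
    """P(b_t | h_j): closeness of the hypothesis prediction to the time-based estimate."""
    estimate = 1.0 - 0.05 * t
    return math.exp(-((predicted_satisfaction - estimate) ** 2) / (2 * sigma ** 2))

def bayes_update(priors: list[float], predictions: list[float], t: int) -> list[float]:
    """Posterior P(h_j | b_t) proportional to P(b_t | h_j) * P(h_j), then normalized."""
    unnorm = [p * likelihood(pred, t) for p, pred in zip(priors, predictions)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Three hypotheses predict different satisfaction values for the opponent's bid at t = 3.
priors = [1 / 3, 1 / 3, 1 / 3]
predicted = [0.90, 0.82, 0.60]               # Psi(b_t | h_j) under each hypothesis
print(bayes_update(priors, predicted, t=3))  # mass shifts toward the 0.82 hypothesis
```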
Negotiation model
The fuzzy constraint agent-based negotiation model includes the behavioral framework of the DA and PA and the negotiation protocol that must be followed by the negotiators. Thus, we initially developed the behavioral framework of DA and PA based on the BDI model, and then we determined the negotiation protocol. Internal components of the negotiation framework are integrated into the agent architecture. The internal components are negotiation strategy, opponent model, utility model and solution generators. Here, the negotiation protocol defines the negotiation process by exchanging offers and counteroffers until reaching an agreement or the negotiation is terminated.
Building agent-based SDM model
First, we generate the behavioral models of the DA and PA based on the BDI architecture (“Agent technology” section). Table 3 shows an example describing the agent’s behavior.
Then, according to the behavioral model of DA/PA, we construct an agent-based negotiation model to simulate the SDM process between the doctor and patient, as shown in Fig. 6. Here, each agent involves four processing modules.
(1) Evaluation module
The evaluation module evaluates the opponent's offer, calculates the aggregate satisfaction value (ASV) of the opponent agent's offer, and determines whether to continue the negotiation.
(2) Concession value calculation module
If the negotiation is to continue, the opponent preference model and concession value calculation modules are combined to calculate the concession value.
(3) Feasible solutions generation module
A set of feasible solutions is generated by the feasible solutions generation module based on the concession values and the opponent’s preferences.
(4) New offer generation module
In the new offer generation module, the best solution is selected from the set of feasible solutions and sent to the opponent. The agents repeatedly exchange offers and counteroffers during the negotiation until the termination conditions are satisfied (e.g., consensus or failure).
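The four modules can be read as a per-round pipeline inside each agent. The sketch below shows only this control flow; the evaluation, concession, and solution-generation logic are placeholders for the equations given in the next subsection ("Negotiation process"), and the numeric rules are assumptions.

```python
# Control-flow sketch of one negotiation round inside an agent (DA or PA).
# The numeric rules are placeholders; the real calculations follow the equations
# of the "Negotiation process" subsection.

class NegotiatingAgent:
    def __init__(self, threshold: float = 1.0, step: float = 0.05):
        self.threshold = threshold   # current satisfaction threshold (epsilon)
        self.step = step             # placeholder concession size

    def evaluate(self, counteroffer_asv: float) -> bool:
        """(1) Evaluation module: accept if the opponent's offer meets the threshold."""
        return counteroffer_asv >= self.threshold

    def concede(self) -> None:
        """(2) Concession module: lower the threshold (placeholder concession rule)."""
        self.threshold -= self.step

    def feasible_solutions(self) -> list[str]:
        """(3) Feasible-solution module: candidate plans meeting the new threshold."""
        return ["plan_A", "plan_B"]  # placeholder candidate set

    def new_offer(self, candidates: list[str]) -> str:
        """(4) New-offer module: pick the best candidate to send to the opponent."""
        return candidates[0]

agent = NegotiatingAgent()
if not agent.evaluate(counteroffer_asv=0.62):
    agent.concede()
    print(agent.new_offer(agent.feasible_solutions()), agent.threshold)
```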
Negotiation process
The fuzzy constraint-directed approach is an effective method to realize agent negotiation [52]. Here, the DA and PA abide by the given negotiation protocol, exchanging offers and counteroffers to solve their respective FCSPs during the negotiation. The negotiation steps include solution evaluation, concession calculation, feasible solution generation, offer generation, and negotiation termination.
Step 1: Evaluation of the solution
Here, the l-th agent uses the ASV to assess satisfaction with counteroffer B to determine whether to reach an agreement or make a concession. Based on Definition 1 (“Negotiation model for SDM” section), the agent’s ASV is calculated as follows.
Here, \(M_{l,i}\left( B\right)\) denotes the fuzzy membership function of the \({i}^{th}\) issue for the \(l^{th}\) agent regarding counteroffer B. This represents the preference of the DA or PA for the \({i}^{th}\) negotiation issue. n is the number of negotiation issues, and \(w_i\) is the weight of the \({i}^{th}\) negotiation issue.
Step 2: Concession value calculation
When the agent receives a counteroffer B, it determines whether to propose an alternative solution at the current satisfaction constraint level by evaluating its behavior state. The behavior state of an agent is determined based on the currently negotiated concession value. Here, as described in "Construction of opponent preference model" section, the agents can construct an opponent preference model based on Bayesian learning to evaluate the opponent's response, concession, internal, and environmental states and thereby determine the concession value. These four states represent the opponent's intention, the agent's concession, the agent's viewpoint, and the environmental constraints, respectively.
a) Response status.
The opponent’s response status O indicates the degree of difference between the previous offer A and the most recently received counteroffer B, which is defined as follows.
Here, \(A_0\) is the initial offer, \(B_0\) is the opponent's first counteroffer, and G(A, B) represents the distance metric between offer A and counteroffer B over the issues \(I_i\in X\). G(A, B) is calculated as follows.
Here, L is the Euclidean distance between the two fuzzy sets \(A_i\) and \(B_i\), which are the probability distributions of offer A and counteroffer B for issue \(I_i\in I\).
b) Concession status.
The opponent's concession status D is determined by the concession value \(\gamma\). Here, the aggregated satisfaction values of counteroffer B and of the opponent's first counteroffer \(B_0\) determine the size of the concession value \(\gamma\). The aggregated satisfaction of the counteroffer is obtained from Eq. (8) in "Construction of opponent preference model" section, and the concession value \(\gamma\) is defined as follows.
c) Internal status.
The internal state of the agent is related to the satisfaction \(\rho\) with the most recent offer and the tightness \(\delta\) of the alternative solution set. Satisfaction \(\rho\) and tightness \(\delta\) are calculated as follows.
Here, \(S^*\in \pi\) is the expected solution from the agent’s last round of negotiation, and \(\varepsilon\) is the satisfaction threshold.
The satisfaction threshold, denoted as \(\varepsilon\), is a predefined value that acts as a minimum satisfaction criterion for accepting an offer during negotiation. This threshold is not an estimated value; rather, it is set based on specific criteria relevant to the decision-making context and negotiation strategy.
For both the DA and PA, the initial \(\varepsilon\) is set to 1.0, representing a full satisfaction level. Throughout the negotiation, this value decreases incrementally based on the concession strategy applied by each agent. The threshold adjustment is achieved through Bayesian learning and fuzzy constraints, enabling each agent to gradually lower their \(\varepsilon\) as the negotiation progresses. The specific adjustments are detailed in Eq. (23) below. The selection of an initial \(\varepsilon\) of 1.0 aligns with the ideal satisfaction level in early negotiation rounds, ensuring that the negotiation only begins from positions of high preference. As negotiations proceed, the agents dynamically reduce \(\varepsilon\) in response to counteroffers, maintaining flexibility in reaching a balanced outcome without prematurely settling for a less preferred agreement.
d) Environmental status.
In SDM, the main environmental constraint t for the DA and PA is the time constraint; thus, we use the following function [53] to describe the time constraint.
Here, \(r_{now}\) is the negotiation time for the current round, \(r_{max}\) is the negotiation deadline, parameter t denotes the negotiation time constraint, and \(\alpha\) and \(\beta\) are constants, where \(\beta>1\mathrm {,\ }0\le \alpha \le 1\). The condition \(\beta>1\) ensures that the time constraint function decreases non-linearly, giving more flexibility at the beginning of the negotiation and applying stricter constraints as the deadline approaches. This design aligns with real-world scenarios where agents make larger concessions as the deadline approaches. Although \(\beta\) can theoretically take large values, in practice it is bounded to maintain realistic negotiation dynamics and prevent excessive sensitivity to time.
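The exact form of the time-constraint function from [53] is not reproduced above. As an assumption consistent with the description (values stay high early and drop more sharply near the deadline when \(\beta>1\)), the sketch below uses a simple polynomial form with parameters \(\alpha\) and \(\beta\).

```python
# Sketch of a time-constraint function consistent with the description above:
# with beta > 1 the value stays high early in the negotiation and drops more
# sharply as the deadline approaches. The exact formula of [53] may differ.

def time_constraint(r_now: int, r_max: int, alpha: float = 0.2, beta: float = 3.0) -> float:
    """Decreases from 1 toward alpha as the round r_now approaches the deadline r_max."""
    assert beta > 1 and 0.0 <= alpha <= 1.0
    return 1.0 - (1.0 - alpha) * (r_now / r_max) ** beta

for r in (1, 5, 10, 15, 20):
    print(r, round(time_constraint(r, r_max=20), 3))
# output: ~1.0 at r=1, 0.9 at r=10, 0.2 at r=20 -> loose early, strict near the deadline
```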
Finally, using the opponent's response status O, concession status D, the internal state, and the time constraint t, we obtain the agent's concession value \(\mathrm {\Delta \varepsilon }\) at the time of negotiation as follows.
Here, parameter \(\omega\) determines which negotiation strategy is applied to adjust the size of the concession value, and there are three main negotiation strategies: i) \(\omega <1\) represents a collaborative strategy; ii) \(\omega =1\) represents a win-win strategy; and iii) \(\omega>1\) represents a competitive strategy.
Given the negotiated concession value \(\mathrm {\Delta \varepsilon }\) and behavior state \(\varepsilon\), the agent’s new behavior state \(\varepsilon ^*\) is updated as follows.
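The end of Step 2 can be summarized as lowering \(\varepsilon\) by the concession value \(\mathrm{\Delta\varepsilon}\), with \(\omega\) modulating how strongly the combined status terms translate into concessions. The combination rule below (a geometric mean of the four status terms raised to the power \(\omega\)) is an illustrative stand-in for the paper's Eqs. (21)-(22), not the exact formula; only the qualitative behavior (\(\omega<1\) concedes more, \(\omega>1\) concedes less) is taken from the text.

```python
# Illustrative stand-in for the concession update in Step 2 (Eqs. (21)-(22)):
# combine the four status terms into a base concession in (0, 1], modulate it with
# omega (omega < 1 concedes more, omega > 1 concedes less), then lower the
# satisfaction threshold. The exact combination rule in the paper may differ.

def concession_value(response: float, concession: float, internal: float,
                     time_c: float, omega: float, scale: float = 0.1) -> float:
    """Delta-epsilon: geometric mean of the status terms, raised to omega, then scaled."""
    base = (response * concession * internal * time_c) ** 0.25  # all terms in (0, 1]
    return scale * (base ** omega)

def update_threshold(epsilon: float, delta_epsilon: float) -> float:
    """New behavior state: epsilon* = epsilon - delta-epsilon (floored at 0)."""
    return max(0.0, epsilon - delta_epsilon)

eps = 1.0
for omega, label in [(0.6, "collaborative"), (1.0, "win-win"), (1.4, "competitive")]:
    d = concession_value(0.4, 0.5, 0.6, 0.9, omega)
    print(label, round(d, 4), round(update_threshold(eps, d), 4))
```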
Step 3: Feasible solution generation
Given the FCN N, the intention \(\mathrm {\Pi }\) and new behavioral state \(\varepsilon ^*\) are used to obtain a feasible solution. The feasible solution is defined as follows.
Given counteroffer B and the feasible solution set P, the expected solution \(S^*\) is defined as follows.
Here, H(S, B) is a utility function that evaluates both the agent's preference for feasible solution S and the similarity between feasible solution S and counteroffer B; it is defined as follows.
Here, \(W_1\) is the preference function of issue i, \(W_2\) is the similarity function measuring the difference between solution S and counteroffer B, and \(\omega _1\) and \(\omega _2\) are the weights associated with the preference and similarity, respectively. \(W_2\) is defined as follows.
Here, \(F_i(S_i)\) is the fuzzy membership function of the agent for issue i in solution S, and \({\bar{h}}_i^f\left( B_i\right)\) is the opponent's fuzzy membership function for issue i in counteroffer B, which is calculated using Eq. (6) of the Bayesian learning-based opponent model ("Construction of opponent preference model" section).
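Step 3 therefore scores each feasible solution by combining the agent's own preference with its similarity to the opponent's counteroffer. The sketch below follows that description; the per-issue aggregation and the concrete similarity measure are assumptions standing in for the exact equations of Step 3.

```python
# Sketch of the scoring used to pick the expected solution S* in Step 3:
# H(S, B) = w1 * W1(S) + w2 * W2(S, B), where W1 is the agent's own preference for S
# and W2 measures similarity between the agent's membership of S and the learned
# opponent membership of B. The concrete similarity measure below is an assumption.

from typing import Callable, Dict

Membership = Dict[str, Callable[[float], float]]

def preference(S: dict, own: Membership) -> float:
    """W1: average of the agent's own membership degrees over the issues in S."""
    return sum(own[i](S[i]) for i in S) / len(S)

def similarity(S: dict, B: dict, own: Membership, opp: Membership) -> float:
    """W2: 1 minus the mean absolute gap between own membership of S and the
    learned opponent membership of B (an illustrative similarity measure)."""
    gaps = [abs(own[i](S[i]) - opp[i](B[i])) for i in S]
    return 1.0 - sum(gaps) / len(gaps)

def score(S: dict, B: dict, own: Membership, opp: Membership,
          w1: float = 1.0, w2: float = 1.0) -> float:
    """H(S, B): weighted combination of preference and similarity."""
    return w1 * preference(S, own) + w2 * similarity(S, B, own, opp)

own = {"cost": lambda x: max(0.0, 1 - x / 8), "effect": lambda x: min(1.0, x / 10)}
opp = {"cost": lambda x: max(0.0, 1 - x / 6), "effect": lambda x: min(1.0, x / 9)}
S = {"cost": 4.0, "effect": 9.0}
B = {"cost": 3.0, "effect": 7.0}
print(round(score(S, B, own, opp), 3))  # the feasible solution with the highest score is offered
```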
Step 4: Offer generation
Given the feasible solution set P and the expected solution \(S^*\), the new offer \(A^*=\{A_1^*,A_2^*,\ldots ,A_i^*,\ldots ,A_N^*\}\) generated over the set of issues \(I \in X\) is defined as follows.
The element \(A_k^*\) in set \(A^*\), which represents the offer for issue i derived from the expected solution component \(S_k^*\) as a boundary-specialized possibility distribution over space X, is defined as follows.
Here, \({\bar{\Pi }}_{X_k}=S_k^*\), and \({\bar{\Pi }}_{X_k}\) denotes cylindrical expansion over the space X. \(X_k\) is the object of issue i, and \(N_X\) is the total number of negotiated objects.
Step 5: Termination
The DA and PA exchange offers and counteroffers continuously until the negotiation succeeds or fails. When given a feasible solution P and an opponent’s counteroffer B, if the agent’s ASV assessment of the opponent’s counteroffer is greater than or equal to its satisfaction threshold for the new round, the agent agrees to the opponent’s proposed offer and the negotiation succeeds. Each agent’s assessment of the ASV of their opponent’s counteroffer is calculated as follows.
If the agent’s new satisfaction threshold \(\varepsilon ^*\) is less than 0 or the set of feasible solutions P is empty, the negotiation fails.
In addition, the negotiation is terminated if the current negotiation period exceeds the agreed negotiation time limit.
Negotiation protocol
The negotiation protocol defines the interaction between agents during the negotiation process. The negotiation protocol represents rules that must be followed by all agents. It defines all interactions between agents and determines the order and structure of messages. During the negotiation, the DA and PA negotiate by sending and receiving various types of messages. Table 4 describes the different types of messages and their negotiation results.
Figure 7 shows a sequence graph that illustrates the negotiation process between the DA and PA. First, the DA generates an initial offer based on its maximum ASV and sends it to the PA via an Ask (offer) message. The PA receives the initial offer from the DA, evaluates the ASV of the offer according to Eq. (14), and then calculates the concession value to determine the new satisfaction threshold according to Eqs. (15) to (22). If the new satisfaction threshold satisfies the negotiation termination condition, i.e., the condition of Eq. (28) is satisfied, the PA sends a Reject () message to the DA, and the negotiation fails. Otherwise, the PA generates a set of feasible solutions based on the new satisfaction threshold according to Eqs. (23) to (26) and determines whether the ASV of the opponent's offer is greater than the new satisfaction threshold. If the ASV is greater than or equal to the new satisfaction threshold, the PA accepts the counterpart's offer and sends an Accept () message to the counterpart. Otherwise, when a feasible solution exists for the PA, the PA generates a counteroffer according to Eqs. (27) and (28) and informs the opponent via a Tell (counteroffer) message. If the PA has no feasible solution, i.e., the feasible solution set P in Eq. (30) is empty, the PA sends an Abort () message to terminate the negotiation, and the negotiation fails. During the negotiation period, the DA and PA continue to exchange offers and counteroffers via negotiation messages until the negotiation succeeds or fails; if the agreed negotiation time is exceeded, the negotiation also fails.
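The alternating-offers protocol of Fig. 7 and Table 4 can be sketched as a simple message loop. The message names follow Table 4; the acceptance test, concession size, and offer generation below are placeholders for the steps described above, and all numeric values are assumptions.

```python
import random

# Sketch of the alternating-offers protocol between the DA and PA (Fig. 7, Table 4).
# Ask/Tell carry offers; Accept/Reject/Abort end the negotiation. The evaluation and
# offer-generation placeholders stand in for Eqs. (14)-(28).

MAX_ROUNDS = 20

def evaluate(offer: dict, threshold: float) -> bool:
    """Placeholder ASV test: accept when the offer's satisfaction meets the threshold."""
    return offer["asv"] >= threshold

def generate_counteroffer(round_no: int) -> dict:
    """Placeholder: offers gradually become more agreeable as the agents concede."""
    return {"asv": 0.4 + 0.05 * round_no + random.uniform(0.0, 0.05)}

def negotiate() -> str:
    threshold = 1.0
    offer = {"asv": 0.4}                         # DA's initial Ask(offer)
    for round_no in range(1, MAX_ROUNDS + 1):
        threshold -= 0.05                        # concession lowers the threshold
        if threshold < 0:
            return "Reject: negotiation failed"
        if evaluate(offer, threshold):
            return f"Accept: agreement in round {round_no}"
        offer = generate_counteroffer(round_no)  # Tell(counteroffer) to the opponent
    return "Abort: deadline exceeded"

print(negotiate())
```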
Performance evaluation
In this section, we discuss numerical experiments conducted to evaluate the agent negotiation model proposed in “Negotiation model for SDM” section. We evaluate the model’s effectiveness and efficiency in solving the SDM problem (SDMP) between a doctor and patient in a healthcare setting. Here, each doctor and patient is represented by an agent (i.e., the DA and PA, respectively), and the negotiation issues depend on treatments selected for children’s asthma. We verified the effectiveness of the proposed BLFCAN model by comparing the performance of different negotiation strategies and different negotiation models.
Experimental settings
The Global Burden of Disease study identified asthma as the most prevalent chronic respiratory disease worldwide, affecting 1–18% of the population in various countries [54,55,56], and bronchial asthma is more common in children than adults [57]. If asthma is controlled in childhood, its morbidity and mortality can be effectively reduced. Here, we consider the treatment of asthma in children as a simple example to explain how the proposed BLFCAN model applies to SDM.
In this case, the negotiation issues involved in selecting treatment include cost, effectiveness, side effects, risks, and convenience.
Tables 5 and 6 present the preference data of the DA and PA for various decision factors, such as cost, effectiveness, side effects, risk, and convenience. The data sources and determination methods for these tables are as follows:
- Issue and Value Range Setting: First, a reasonable value range for each factor (i.e., "issue") is set based on actual considerations in medical decision-making for doctors and patients. For example, the cost range is set from 0 to 8,000 RMB, reflecting the upper and lower limits of treatment expenses. The ranges for effectiveness, risk, and side effects are set as specific rankings or percentages to standardize these factors.
- Most Preferred Range Determination: Within each factor's value range, the most preferred range is defined based on the goals and preferences of doctors and patients. Doctors and patients have different needs for factors such as cost, effectiveness, and side effects. For instance, doctors tend to balance effectiveness and side effects, while patients prioritize low costs and high effectiveness. This preferred range is derived from surveys and references from relevant literature.
- Minimum and Maximum Preference Values: The minimum and maximum preference values define the acceptable preference range for each factor from the perspectives of doctors and patients. The minimum preference value represents the lowest acceptable level, and the maximum preference value represents the highest acceptable level. These values are set through data analysis of historical choices made by doctors and patients, as well as feedback from expert consultations.
- Weight Preference Assignment: The weight for each factor represents its importance in the decision-making process. The weights are derived based on expert interviews and surveys, combined with the actual preferences of doctors and patients in decision-making. For example, doctors prioritize effectiveness and side effects, so these factors are assigned higher weights, while patients place greater importance on cost and effectiveness.
The trapezoidal fuzzy membership functions for the DA and PA according to Tables 5 and 6 are shown in Fig. 8. Finally, the weight preference column shows the importance of each issue to the DA and PA.
As shown in Tables 5 and 6, the DA and PA have different preference settings, and they will possess different degrees of satisfaction for a given treatment, which is in line with real-world SDM situations. In this study, we obtained these preference settings by interviewing experienced doctors who have treated children’s asthma and the parents of children with asthma.
These experiments considered one DA and one PA. In addition, the number of negotiation issues was set to five, the initial satisfaction threshold \(\varepsilon\) for both the DA and PA was set to 1.0, and the satisfaction reservation value was set to 0. The maximum number of negotiation rounds was set to 20. Here, if the number of negotiation rounds exceeded 20, the negotiation failed. Note that all experimental results reported in this paper represent the average values obtained over 200 experiments conducted under equal conditions. The following indexes were used to evaluate the experimental results.
ASV: The aggregated satisfaction value of the DA or PA when an agreement was reached.
DASV: The disparity in the aggregated satisfaction values between the DA and PA when an agreement was reached.
CASV: The sum of the aggregated satisfaction values of the DA and PA when an agreement was reached.
NR: The average number of negotiation rounds required to reach an agreement.
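The four indexes can be computed directly from the per-run outcomes. The sketch below shows the calculation on made-up run data, not the experimental results of this study.

```python
# Computing the evaluation indexes (ASV, DASV, CASV, NR) from per-run outcomes.
# The run data below are made up for illustration only.

runs = [  # (DA ASV, PA ASV, rounds to agreement) for each successful run
    (0.62, 0.70, 7),
    (0.58, 0.73, 8),
    (0.60, 0.69, 7),
]

def mean(values):
    return sum(values) / len(values)

avg_asv_da = mean([da for da, _, _ in runs])
avg_asv_pa = mean([pa for _, pa, _ in runs])
avg_dasv = mean([abs(da - pa) for da, pa, _ in runs])   # satisfaction gap
avg_casv = mean([da + pa for da, pa, _ in runs])        # combined satisfaction
avg_nr = mean([r for _, _, r in runs])                  # negotiation rounds

print(avg_asv_da, avg_asv_pa, avg_dasv, avg_casv, avg_nr)
```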
Example scenario of proposed negotiation model
In this example, assume that the DA starts the negotiation. This negotiation will lead to the following result.
In round one, with no negotiation history, the DA and PA offer their expected offers [Cost: 4.5, Effectiveness: 7.0, Side effects: 0.1, Risk: 0.1, Convenience: 7.0] to each other, as shown in Fig. 9.
In round two, the DA first evaluates the PA's counteroffer according to Eq. (15). Here, the DA's feasible solution satisfaction threshold in round one was set to 1.
According to Eqs. (15) to (22), the new satisfaction threshold in round two was 0.975. The DA and PA update their new feasible solution sets on all issues and the expected solutions according to Eqs. (23) to (26), as shown in Fig. 10. The DA’s new feasible solution set update process in terms of the Cost factor is shown in Fig. 11.
Next, the DA generates new offers on the feasible solution set and the expected solution according to Eqs. (27) and (28).
Finally, the DA makes a comparison according to Eq. (29). Because the ASV of the PA's counteroffer does not meet the DA's new satisfaction threshold (0.975), the DA continues the negotiation and sends a new offer to the PA.
The subsequent negotiation rounds were calculated using the same methods described for round two. The results are summarized in Tables 7 and 8.
By round seven, the DA determined that its satisfaction value for the offer from the PA was 0.561, which was greater than the new threshold of 0.373; thus, an agreement was reached, and the result of this negotiation was [Cost: 4.3, Effectiveness: 9.0, Side effects: 0.05, Risk: 0.08, Convenience: 9.0].
Performance comparisons of different BLFCAN strategies
To evaluate the impact of the cooperative, win-win, and competitive negotiation strategies on the effectiveness of the proposed model, we first adjusted the parameter settings, including \(\omega _1\) and \(\omega _2\) (Eq. (26)), to vary the negotiation strategies of the DA and PA. In these experiments, the strategy parameters were set as follows: (1) \(\omega _1=0.2\) and \(\omega _2=1.0\): the collaborative strategy; (2) \(\omega _1{=\omega }_2=1.0\): the win-win strategy; and (3) \(\omega _1=1.8\) and \(\omega _2=1.0\): the competitive strategy.
Here, we conducted 200 experiment iterations on five negotiation topics with three different negotiation strategies for the DA and PA ("Experimental settings" section). The negotiation results are summarized in Table 9, in which the DA and PA each adopted the collaborative, win-win, and competitive strategies in turn. The experimental results demonstrate that a good average solution was obtained when the DA and PA adopted the collaborative or win-win strategies. In addition, the CASV and individual ASV were all greater than 55%.
When the DA selected the collaborative strategy in Case 1, the competitive strategy was an inferior choice for the PA. It not only reduced the CASV (1.315 vs. 1.322 vs. 1.295), the average ASV for the DA (0.625 vs. 0.596 vs. 0.570), and the average ASV for the PA (0.690 vs. 0.726 vs. 0.725) but also increased the average DASV (0.065 vs. 0.130 vs. 0.155) and the average NR (7.31 vs. 7.51 vs. 7.78). Note that similar results were obtained in Cases 2 and 3.
We found that the collaborative strategy minimized the satisfaction gap between the DA and PA and reduced the negotiation time. We also found that the win-win strategy increased patient satisfaction; however, it reduced doctor satisfaction, widened the satisfaction gap, and increased the negotiation time. Our analysis indicates that the win-win strategy is more stringent than the collaborative strategy in terms of concessions, which caused the agents to spend more time reaching an agreement, and the satisfaction of the DA and PA decreased over time. As a result, the negotiation results obtained with the competitive strategy were worse than those of the other two strategies.
We also found that when the DA or PA adopted the collaborative strategy, the average DASV between them was lower than the other strategies (reduced by 1–12.7%), and the average CASV was greater than the average CASV with the other strategies.
We also investigated the convergence rate of the negotiation by varying the parameter \(\omega\) for the concession value in Eq. (21). Figure 12 shows the number of negotiation rounds and the size of the concession value for the collaborative (\(\omega\) = 0.6–0.8), win-win (\(\omega\) = 1.0), and competitive (\(\omega\) = 1.2–1.4) strategies. When \(\omega\) = 0.6 or 0.8, the concession value per round was greater than that of the other strategies, which resulted in fewer negotiation rounds than the other strategies. As the value of \(\omega\) increased gradually to 1.2 or 1.4, i.e., when the model adopted the competitive strategy, the concession value per round became significantly smaller, and the number of negotiation rounds increased.
Performance comparison of different negotiation models
Here, to verify the effectiveness of the proposed BLFCAN model, we compare it with agents that use only Bayesian learning techniques. The agents' negotiation issues and preferences were set as defined in "Experimental settings" section, and we performed 200 separate iterations of the experiment. The negotiation results are summarized in Table 10.
The results shown in Table 10 demonstrate that the proposed model acquired better negotiation results than the compared agents under different negotiation strategies. For example, our agents exhibited significantly higher individual ASVs than the compared agents, which resulted in improvements of at least 26.4% in terms of average CASV. In addition, the average DASV between the DA and PA was significantly smaller, showing a reduction of 4.5–16.9%. This is because the satisfaction of the DA and PA decreased as the number of negotiation rounds (i.e., the negotiation time) increased. Overall, we found that the proposed model achieved better negotiation results in less time (within approximately 7.31–8.19 rounds).
These results suggest that the proposed BLFCAN model can search the feasible solution space thoroughly and find a better solution for the SDMP. In addition, the proposed negotiation took less than 60 s to execute, which makes it suitable for clinical SDM environments where communication between doctors and patients is complex and time-consuming, and it can therefore save the time and cost of such communication. Furthermore, the proposed model considers different negotiation strategies, which allows it to effectively simulate different doctor-patient negotiations and obtain satisfactory results for both parties.
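The exact Bayesian update used in BLFCAN is defined earlier in the paper and is not repeated here; the following minimal sketch only illustrates the generic opponent-modelling step that the above comparison isolates: maintain a discrete posterior over candidate opponent-preference profiles and sharpen it after each observed counter-offer. The issue names, the likelihood model, and the candidate profiles are illustrative assumptions.

```python
from typing import Dict, Sequence

Offer = Dict[str, float]        # e.g. {"dosage": 0.4, "follow_up": 0.7} (normalized issue values)
Hypothesis = Dict[str, float]   # candidate issue-weight profile attributed to the opponent

def likelihood(offer: Offer, hypothesis: Hypothesis) -> float:
    """Assumed likelihood model: offers that score well under a hypothesis are more probable."""
    score = sum(hypothesis[issue] * value for issue, value in offer.items())
    return max(score, 1e-6)

def bayesian_update(prior: Sequence[float],
                    hypotheses: Sequence[Hypothesis],
                    offer: Offer) -> list:
    """One application of Bayes' rule: posterior is proportional to prior times likelihood."""
    unnormalized = [p * likelihood(offer, h) for p, h in zip(prior, hypotheses)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Usage: start from a uniform prior over two hypothetical preference profiles and
# sharpen the belief as counter-offers are observed.
hypotheses = [{"dosage": 0.8, "follow_up": 0.2},
              {"dosage": 0.2, "follow_up": 0.8}]
belief = [0.5, 0.5]
for observed in ({"dosage": 0.3, "follow_up": 0.9},
                 {"dosage": 0.2, "follow_up": 0.8}):
    belief = bayesian_update(belief, hypotheses, observed)
print(belief)  # the posterior shifts toward the hypothesis that best explains the offers
```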
Conclusion
In this paper, we have proposed an agent-based multi-issue negotiation model that employs Bayesian learning and fuzzy constraints to solve the SDM problem between doctors and patients. Unlike traditional agent-based negotiation systems, the proposed BLFCAN model considers the constraints between doctor and patient as well as the negotiation issues. In addition, the proposed model attempts to understand each agent's preferences, avoid potential conflicts, and reduce the satisfaction gap between the negotiating parties by combining iterative negotiation with opponent modeling techniques, reaching a satisfactory agreement more effectively than existing techniques.
The proposed model was evaluated experimentally on a set of negotiation issues using different strategies for the doctor and patient agents. In terms of individual satisfaction, we found that the proposed model achieved 55.4–64.2% satisfaction for doctors and 69–74.5% satisfaction for patients. In addition, the satisfaction gap between the doctor and patient agents was kept within 5.8–18.5%. In terms of aggregate welfare, the proposed model achieved an aggregate satisfaction rate of 129.3–134.2%. The proposed BLFCAN model also increased overall satisfaction by 26.5–29% in fewer rounds compared with other negotiation models. The experimental results demonstrated that the proposed BLFCAN model can adopt different negotiation strategies for various medical scenarios and outperforms existing negotiation models. In addition, it can effectively reduce communication time between doctors and patients, reduce the influence of emotions and biases on decision making, and complement the clinical application of SDM.
Although the proposed model can be applied to SDM, it is not fully applicable to all healthcare scenarios. Thus, in future work, we will consider extending the model to more complex healthcare SDM scenarios, e.g., multiagent systems comprising multiple DAs and PAs. In addition, we could consider employing adaptive strategies, in which agents can realistically change strategies during the negotiation process. Finally, we could also investigate whether the proposed model can indirectly improve patient compliance, improve disease outcomes, and address the SDMP in the context of other diseases.
Data availability
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
References
Bomhof-Roordink H, Gärtner FR, Stiggelbout AM, Pieterse AH. Key components of shared decision making models: a systematic review. BMJ Open. 2019;9(12):e031763. https://doi.org/10.1136/bmjopen-2019-031763.
Charles C, Gafni A, Whelan T. Decision-making in the physician–patient encounter: revisiting the shared treatment decision-making model. Soc Sci Med. 1999;49(5):651–61. https://doi.org/10.1016/S0277-9536(99)00145-8.
Kioko PM, Requena Meana P. Prudence in Shared Decision-Making: The Missing Link between the “Technically Correct” and the “Morally Good” in Medical Decision-Making. J Med Philos Forum Bioeth Philos Med. 2021;46(1):17–36. https://doi.org/10.1093/jmp/jhaa032.
Drake RE, Cimpean D, Torrey WC. Shared decision making in mental health: prospects for personalized medicine. Dialogues Clin Neurosci. 2009;11(4):455–63. https://doi.org/10.31887/DCNS.2009.11.4/redrake.
Pieterse AH, Stiggelbout AM, Montori VM. Shared Decision Making and the Importance of Time. JAMA. 2019;322(1):25–6. https://doi.org/10.1001/jama.2019.3785.
Frosch DL, Kaplan RM. Shared decision making in clinical medicine: past research and future directions. Am J Prev Med. 1999;17(4):285–94. https://doi.org/10.1016/S0749-3797(99)00097-5.
Strum S. Consultation and patient information on the Internet: the patients’ forum. Br J Urol. 1997;80 Suppl 3:22–6.
Daraz L, Morrow AS, Ponce OJ, Beuschel B, Farah MH, Katabi A, et al. Can Patients Trust Online Health Information? A Meta-narrative Systematic Review Addressing the Quality of Health Information on the Internet. J Gen Intern Med. 2019;34(9):1884–91. https://doi.org/10.1007/s11606-019-05109-0.
Agoritsas T, Heen AF, Brandt L, Alonso-Coello P, Kristiansen A, Akl EA, et al. Decision aids that really promote shared decision making: the pace quickens. Br Med J. 2015;350:g7624. https://doi.org/10.1136/bmj.g7624.
Papageorgiou D, Farine DR. Shared decision-making allows subordinates to lead when dominants monopolize resources. Sci Adv. 2020;6(48):eaba5881. https://doi.org/10.1126/sciadv.aba5881.
Kasper JF, Mulley AG, Wennberg JE. Developing shared decision-making programs to improve the quality of health care. QRB Qual Rev Bull. 1992;18(6):183–90. https://doi.org/10.1016/s0097-5990(16)30531-0.
Vaisson G, Provencher T, Dugas M, Trottier Mv, Chipenda Dansokho S, Colquhoun H, et al. User Involvement in the Design and Development of Patient Decision Aids and Other Personal Health Tools: A Systematic Review. Med Dec Making. 2021;41(3):261–74. https://doi.org/10.1177/0272989X20984134.
Poprzeczny AJ, Stocking K, Showell M, Duffy JMN. Patient Decision Aids to Facilitate Shared Decision Making in Obstetrics and Gynecology: A Systematic Review and Meta-analysis. Obstet Gynecol. 2020;135(2):444–51.
Goossensen A, Zijlstra P, Koopmanschap M. Measuring shared decision making processes in psychiatry: Skills versus patient satisfaction. Patient Educ Couns. 2007;67(1):50–6. https://doi.org/10.1016/j.pec.2007.01.017.
Faiman B, Tariman JD. Shared Decision Making: Improving Patient Outcomes by Understanding the Benefits of and Barriers to Effective Communication. Clin J Oncol Nurs. 2019;23(5):540–2.
Joseph-Williams N, Abhyankar P, Boland L, Bravo P, Brenner AT, Brodney S, et al. What Works in Implementing Patient Decision Aids in Routine Clinical Settings? A Rapid Realist Review and Update from the International Patient Decision Aid Standards Collaboration. Med Dec Making. 2020;41(7):907–37. https://doi.org/10.1177/0272989X20978208.
Jennings NR, Faratin P, Lomuscio AR, Parsons S, Wooldridge MJ, Sierra C. Automated negotiation: prospects, methods and challenges. Group Decis Negot. 2001;10(2):199–215. https://doi.org/10.1023/A:1008746126376.
Kowalczyk R, Bui V. On Fuzzy e-Negotiation Agents: autonomous negotiation with incomplete and imprecise information. In: Proceedings 11th International Workshop on Database and Expert Systems Applications. 2000. pp. 1034–1038. https://doi.org/10.1109/DEXA.2000.875154.
Dubois D, Fargier H, Fortemps P. Fuzzy scheduling: Modelling flexible constraints vs. coping with incomplete knowledge. Eur J Oper Res. 2003;147(2):231–52.
Gwak J, Sim KM. Bayesian learning based negotiation agents for supporting negotiation with incomplete information. In: Proceedings of the World Congress on Engineering 2012, July 4–6, 2012, London, UK. vol. 2188. International Association of Engineers; 2012. pp. 163–168.
Mohammadi-Ashnani F, Movahedi Z, Fouladi-Ghaleh K. Using Bayesian Learning for Opponent Modeling in Multiagent Automated Negotiation. In: Proceedings of the 5th International Conference on Web Research, ICWR; 2019.
Yi Z, Xin-gang Z, Yu-zhuo Z. Bargaining strategies in bilateral electricity trading based on fuzzy Bayesian learning. Int J Electr Power Energy Syst. 2021;129:106856. https://doi.org/10.1016/j.ijepes.2021.106856.
Kraepelien M, Svanborg C, Lallerstedt L, Sennerstam V, Lindefors N, Kaldo V. Individually tailored internet treatment in routine care: A feasibility study. Internet Interv. 2019;18:100263. https://doi.org/10.1016/j.invent.2019.100263.
Doherr H, Christalle E, Kriston L, Härter M, Scholl I. Use of the 9-item Shared Decision Making Questionnaire (SDM-Q-9 and SDM-Q-Doc) in intervention studies—A systematic review. PLoS ONE. 2017;12(3):e0173904. https://doi.org/10.1371/journal.pone.0173904.
Scholl I, Koelewijn-van Loon M, Sepucha K, Elwyn G, Légaré F, Härter M, et al. Measurement of shared decision making–a review of instruments. Z Evidenz Fortbild Qualität Gesundheitswesen. 2011;105(4):313–24.
Urman RD, Southerland WA, Shapiro FE, Joshi GP. Concepts for the Development of Anesthesia-Related Patient Decision Aids. Anesth Analg. 2019;128(5):1030–35.
Wooldridge M, Jennings NR. Intelligent agents: theory and practice. Knowledge Eng Rev. 1995;10(2):115–52. https://doi.org/10.1017/S0269888900008122.
De Silva L, Meneguzzi FR, Logan B. BDI agent architectures: a survey. In: Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI), Japan; 2020. pp. 4914–4921.
Rajavel R, Ravichandran SK, Nagappan P, Ramasubramanian Gobichettipalayam K. Cloud service negotiation framework for real-time E-commerce application using game theory decision system. J Intell Fuzzy Syst. 2021;41:5617–28. https://doi.org/10.3233/JIFS-189882.
Vulkan N, Jennings NR. Efficient mechanisms for the supply of services in multi-agent environments. Decision Support Systems. 2000;28(1-2):5-19.
Rosenschein JS. Rational interaction: cooperation among intelligent agents. Stanford: Stanford Univ; 1986. https://www.osti.gov/biblio/5310977.
Kraus S. Strategic negotiation in multiagent environments. MIT Press; 2001.
Monteserin A, Amandi A. Argumentation–based negotiation planning for autonomous agents. Decis Support Syst. 2011;51(3):532–48. https://doi.org/10.1016/j.dss.2011.02.016.
Jonker CM, Robu V, Treur J. An agent architecture for multi-attribute negotiation using incomplete preference information. Auton Agent Multi-Agent Syst. 2007;15(2):221–52. https://doi.org/10.1007/s10458-006-9009-y.
Hsu CY, Kao BR, Ho VL, Lai KR. Agent-based fuzzy constraint-directed negotiation mechanism for distributed job shop scheduling. Eng Appl Artif Intell. 2016;53:140–54. https://doi.org/10.1016/j.engappai.2016.04.005.
Rajavel R, Thangarathanam M. Agent-based automated dynamic SLA negotiation framework in the cloud using the stochastic optimization approach. Appl Soft Comput. 2021;101:107040. https://doi.org/10.1016/j.asoc.2020.107040.
Rajavel R, Ravichandran SK, Kanagachidambaresan GR. Agent-based cloud service negotiation architecture using similarity grouping approach. Int J Wavelets Multiresolution Inf Process. 2019;18(01):1941015. https://doi.org/10.1142/S0219691319410157.
Rajavel R, Thangarathinam M. ADSLANF: A negotiation framework for cloud management systems using a bulk negotiation behavioral learning approach. Turk J Electr Eng Comput Sci. 2017;25(1):563–90. https://doi.org/10.3906/elk-1403-45.
Cao M, Luo X, Luo X, Dai X. Automated negotiation for e-commerce decision making: A goal deliberated agent architecture for multi-strategy selection. Decis Support Syst. 2015;73:1–14. https://doi.org/10.1016/j.dss.2015.02.012.
Li X, Yu C. A Novel Multi-Agent Negotiation Model for E-Commerce Platform. In: 2018 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS); 2018. pp. 390–393. https://doi.org/10.1109/ICITBS.2018.00105.
Xiao J, Wang X, Zhang H. Exploring the Ordinal Classifications of Failure Modes in the Reliability Management: An Optimization-Based Consensus Model with Bounded Confidences. Group Decis Negot. 2022;31(1):49–80. https://doi.org/10.1007/s10726-021-09756-9.
Zhang H, Wang F, Dong Y, Chiclana F, Herrera-Viedma E. Social Trust Driven Consensus Reaching Model With a Minimum Adjustment Feedback Mechanism Considering Assessments-Modifications Willingness. IEEE Trans Fuzzy Syst. 2022;30(6):2019–31. https://doi.org/10.1109/TFUZZ.2021.3073251.
Markovitch S, Reger R. Learning and Exploiting Relative Weaknesses of Opponent Agents. Auton Agent Multi-Agent Syst. 2005;10(2):103–30. https://doi.org/10.1007/s10458-004-6977-7.
Ren Z, Anumba CJ, Ugwu OO. Negotiation in a multi-agent system for construction claims negotiation. Appl Artif Intell. 2002;16(5):359–94. https://doi.org/10.1080/08839510290030273.
Milliken GA. Nonlinear Regression Analysis and Its Applications. Technometrics. 1990;32(2):219–20. https://doi.org/10.1080/00401706.1990.10484638.
Farag GM, AbdelRahman SES, Bahgat R, Moneim AMA. Towards KDE mining approach for multi-agent negotiation. In: 2010 The 7th International Conference on Informatics and Systems (INFOS), Cairo, Egypt; 2010. pp. 1–7.
Krose B, Smagt P. An introduction to neural networks. Amsterdam: University of Amsterdam; 1996. p. 29.
Petukhova V, Sharifullaeva F, Klakow D. Modelling Shared Decision Making in Medical Negotiations: Interactive Training with Cognitive Agents. In: Baldoni M, Dastani M, Liao B, Sakurai Y, Zalila Wenkstern R, editors. PRIMA 2019: Principles and Practice of Multi-Agent Systems. Springer International Publishing; 2019. pp. 251–70.
Rivera-Spoljaric K, Halley M, Wilson SR. Shared clinician–patient decision-making about treatment of pediatric asthma: what do we know and how can we use it? Curr Opin Allergy Clin Immunol. 2014;14(2):161–67.
Lai KR, Lin MW, Yu TJ. Learning opponent’s beliefs via fuzzy constraint-directed approach to make effective agent negotiation. Appl Intell. 2010;33(2):232–46. https://doi.org/10.1007/s10489-009-0162-2.
Hindriks K, Tykhonov D. Opponent modelling in automated multi-issue negotiation using bayesian learning. In: Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 1, Richland, SC, 2008. pp. 331–338.
Lai KR, Lin M. Modeling agent negotiation via fuzzy constraints in e-business. Comput Intell. 2004;20(4):624–42.
Zulkernine FH, Martin P. An Adaptive and Intelligent SLA Negotiation System for Web Services. IEEE Trans Serv Comput. 2011;4(1):31–43. https://doi.org/10.1109/TSC.2010.44.
Bateman ED, Hurd SS, Barnes PJ, Bousquet J, Drazen JM, FitzGerald M, et al. Global strategy for asthma management and prevention: GINA executive summary. Eur Respir J. 2008;31(1):143. https://doi.org/10.1183/09031936.00138707.
GBD 2015 Chronic Respiratory Disease Collaborators. Global, regional, and national deaths, prevalence, disability-adjusted life years, and years lived with disability for chronic obstructive pulmonary disease and asthma, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015. Lancet Respir Med. 2017;5(9):691–706. https://doi.org/10.1016/s2213-2600(17)30293-x.
Huang K, Yang T, Xu J, Yang L, Zhao J, Zhang X, et al. Prevalence, risk factors, and management of asthma in China: a national cross-sectional study. Lancet. 2019;394(10196):407–18.
Refat N, Abdelgahny EAH, Bahaa M, El-Fatah RAER. Pediatric asthma: the age of onset and presenting features in Minia Governorate, Egypt. Egypt J Chest Dis Tuberc. 2021;70:357.
Acknowledgements
We would like to express our gratitude to the Department of Pediatrics in Xiamen Hospital of Traditional Chinese Medicine for providing the research environment.
Funding
This work was supported in part by Xiamen Youth Science and Technology Innovation Project under Grant (No.3502Z20206073), Xiamen Overseas Returnees Program under Grant (No.XM202017206), High Level Talent Project of Xiamen University of Technology under Grant (No.YSK20002R), and Graduate Student Science and Technology Innovation Program Project of Xiamen University of Technology under Grant (No.YKJCX2021056). These funds were used in the design of the study and in the collection, analysis, and interpretation of data, as well as in the writing of the manuscript.
Author information
Authors and Affiliations
Contributions
Xin Chen was responsible for developing the BLFCAN model and executing the iterative negotiation experiments, and drafted the initial manuscript. Fei-Ping Hong was responsible for data collection and participated in the experimental research. Kai-Biao Lin, Yong Liu, and Jiang-Tao Lu were responsible for the experimental design and participated in the experimental research. Ping Lu provided the relevant literature review and the main idea of the developed models, and finalized the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
This study was approved by the Medical Ethics Committee of Xiamen Hospital of Traditional Chinese Medicine, China (approval number 2021-K065-01). All methods were performed in accordance with the relevant guidelines and regulations of the institutional and/or national research committee and with the Declaration of Helsinki. In addition, all participants and their parents or legal guardians provided written informed consent after a complete description of the study.
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Chen, X., Liu, Y., Hong, FP. et al. Bayesian learning-based agent negotiation model to support doctor-patient shared decision making. BMC Med Inform Decis Mak 25, 67 (2025). https://doi.org/10.1186/s12911-024-02839-y