What is Coser?
- 1. Coser (COst-SEnsitive Rough sets) is a software package dedicated to rough set problems, especially those related to cost-sensitive learning.
- 2. Coser has a good graphical user interface (GUI), and in each dialog there is a [Help] button providing information on parameters, source code file names, and related papers.
- 3. Coser is green (portable) software. You can download it, unzip it, and run it immediately.
- 4. Coser is open source software written in Java. The source code is well documented, and standard Java help files (in .html format) are generated.
- 5. Coser is evolving software. We update it frequently to include more algorithms.
Who uses Coser?
- 1. We use Coser to conduct experiments on our algorithms and compare them with algorithms from related work.
- 2. Reviewers of our papers may use Coser to check the effectiveness of our algorithms.
- 3. You can use Coser to implement your algorithms and compare them with ours!
What is the platform requirement of Coser?
- 1. Windows. Coser has not been tested on other operating systems yet. However, since Coser is written in Java, it should be easy to port.
- 2. JDK or JRE 1.5 or higher. In most cases you have it already.
- 3. Weka. This is because we use some Weka APIs, as sketched below.
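For readers unfamiliar with the Weka dependency, here is a minimal sketch of loading an .arff decision system through Weka's converter API. The file path is a placeholder; Coser's own loading code may differ.

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadArffDemo {
    public static void main(String[] args) throws Exception {
        // Read a nominal data set without missing values (placeholder path).
        Instances data = DataSource.read("data/mushroom.arff");
        // Treat the last attribute as the decision (class) attribute.
        data.setClassIndex(data.numAttributes() - 1);
        System.out.println("Instances: " + data.numInstances()
                + ", attributes: " + data.numAttributes());
    }
}
```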
How to repeat our experiments?
- If you are a reviewer or a reader of our papers, please follow the instructions below to reproduce our results. Note that the numbers are the same as in my publication list.
76. Fan Min, Qinghua Hu and William Zhu, Granular association rules on two universes with four measures (submitted to Information Sciences).
- 76.1 Start coser by double clicking coser.bat
- 76.2 Grarule -> Load MMER -> (choose three .arff files; examples are already there) OK
- 76.3 Grarule -> Rule generation -> (settings as indicated in the paper) -> Compute. Rules are given in the dialog, and run time information is given in the console.
75. Fan Min, Qinghua Hu and William Zhu, Feature selection with test cost constraint (submitted to International Journal of Approximate Reasoning).
- 75.1 Start coser by double clicking coser.bat
- 75.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values) OK
- 75.3 TCS-DS -> Test-cost constraint reduction (exhaustive) -> settings: algorithm = ALL, consistency metric = POS, number of experiments = 100 -> Compute. See "[6]: the time for the execution" for SESRA, SESRA* and Backtrack
- 75.4 TCS-DS -> Constraint reduction (compared with optimal) -> settings: heuristic mode = information entropy, lambda upper bound = 0, lambda lower bound = -3, number of experiments = 1000 -> Compute. Only observe the "Finding optimal factor" and "Average run time."
74. Fan Min and William Zhu, Attribute reduction of data with error ranges and test costs, Information Sciences, vol. 211, pp. 48-67, November 2012.
- 74.1 Start coser by double clicking coser.bat
- 74.2 DS -> Load DS -> (choose a file, any data) OK
- 74.3 DS -> Normalization -> OK (please remember the name of the normalized data; a minimal normalization sketch follows these steps)
- 74.4 TCS-DS-ER -> Load TCS-DS-ER (specify the normalized filename) OK
- 74.5 TCS-DS-ER -> Lambda weighted reduction (to obtain data for the heuristic algorithm)
- 74.6 TCS-DS-ER -> Time comparison (to obtain data for the backtrack algorithm)
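Step 74.3 normalizes numeric attributes before building the test-cost decision system with error ranges. As a minimal sketch of what such a step typically does (not necessarily Coser's exact code), min-max normalization maps each numeric column to [0, 1]:

```java
// Min-max normalization of one numeric column to [0, 1].
public class MinMaxNormalize {
    public static double[] normalize(double[] column) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double v : column) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double[] result = new double[column.length];
        for (int i = 0; i < column.length; i++) {
            // Constant columns map to 0 to avoid division by zero.
            result[i] = (max == min) ? 0.0 : (column[i] - min) / (max - min);
        }
        return result;
    }
}
```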
72. Fan Min and William Zhu, A competition strategy to cost-sensitive decision trees, RSKT, pp. 359-368, 2012.
- 72.1 Start coser by double clicking coser.bat
- 72.2 BCS-DS -> Load BCS-DS -> OK
- 72.3 BCS-DS -> CC-DT parameter comparison -> choose different benchmark algorithms -> number of experiments = 1000 -> OK (the three different algorithms correspond to two figures and one table in the paper)
- 72.4 BCS-DS -> CC-DT prune comparison -> number of experiments = 1000 -> OK
65. Fan Min, Huaping He, Yuhua Qian and William Zhu, Test-cost-sensitive attribute reduction, Information Sciences, vol. 181, Issue 22, pp. 4928-4942, November 2011.
- 65.1 Start coser by double clicking coser.bat
- 65.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values) OK
- 65.3 TCS-DS -> Minimal test cost reduction -> Compute (a sketch of the λ-weighted heuristic idea follows)
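Paper 65 is built around a λ-weighted heuristic. The sketch below reflects our reading of the idea, with made-up numbers: an attribute's significance is its information gain scaled by its test cost raised to λ ≤ 0, so a cheaper test wins when the gains are close.

```java
// Sketch of a lambda-weighted attribute significance (our reading, not
// necessarily Coser's exact formula): gain * cost^lambda with lambda <= 0.
public class LambdaWeightedSignificance {
    public static double significance(double infoGain, double testCost, double lambda) {
        return infoGain * Math.pow(testCost, lambda);
    }

    public static void main(String[] args) {
        // With lambda = -1, a costly attribute's score is divided by its cost.
        System.out.println(significance(0.30, 2.0, -1.0)); // 0.15
        System.out.println(significance(0.25, 1.0, -1.0)); // 0.25: cheaper test wins
    }
}
```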
64. Fan Min and William Zhu, Attribute reduction with test cost constraint, Journal of Electronic Science and Technology, vol. 9, no. 2, pp. 97-102, June 2011.
- 64.1 Start coser by double clicking coser.bat
- 64.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values) OK
- 64.3 TCS-DS -> Test cost constraint reduction -> compute
62. Fan Min and William Zhu, Minimal cost attribute reduction through backtracking, FGIT-DTA/BSBT, CCIS 258, pp. 100-107, 2011.
- 62.1 Start coser by double clicking coser.bat
- 62.2 BCS-DS -> Load BCS-DS -> (choose an .arff file; nominal data without missing values; if the number of classes is more than 2, specify the misclassification matrix accordingly)
- 62.3 BCS-DS -> Optimal reducts (backtrack) -> Compute
61. Fan Min and William Zhu, Optimal partial reducts with test cost constraint, RSKT, pp. 57-62, 2011.
- 61.1 Start coser by double clicking coser.bat
- 61.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values) OK
- 61.3 TCS-DS -> Test cost constraint reduction (exhaustive) -> compute (choose algorithms SESRA and SESRA*)
60. Fan Min and William Zhu, Optimal sub-reducts in the dynamic environment, GrC, pp. 457-462, 2011.
- 60.1 Start coser by double clicking coser.bat
- 60.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values) OK
- 60.3 TCS-DS -> Test cost constraint reduction (exhaustive) -> compute (choose algorithms BASS and ALL)
58. Hong Zhao and Fan Min, Test-cost-sensitive attribute reduction based on neighborhood rough set, GrC, pp. 802-806, 2011.
- 58.1 Start coser by double clicking coser.bat
- 58.2 DS -> Load DS -> (choose an .arff file; any data) OK
- 58.3 DS -> Normalization -> OK (please remember the name of the normalized data)
- 58.4 TCS-DS-NH -> Load TCS-DS-NH -> (specify the normalized filename) OK
- 58.5 TCS-DS-NH -> Test cost neighborhood reduction
57. Guiying Pan, Fan Min and William Zhu, Test cost constraint reduction with common cost, FGIT, LNCS 7105, pp. 55-63, 2011.
- 57.1 Start coser by double clicking coser.bat
- 57.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values; the test cost relationship should be "Simple common")
- 57.3 TCS-DS -> Simple common test cost constraint reduction -> Compute
56. Guiying Pan, Fan Min, Zhongmei Zhou, and William Zhu, A genetic algorithm to the minimal test cost reduct problem, GrC, pp. 539-544, 2011.
- 56.1 Start coser by double clicking coser.bat
- 56.2 TCS-DS -> Load TCS-DS -> (choose an .arff file; nominal data without missing values) OK
- 56.3 TCS-DS -> Minimal test cost constraint reduction based on GA -> compute
- Copy results from the dialog to Excel and draw the lines immediately!
- Of course, you can change data sets and settings to obtain your own results.
- Attention: if you set a large "number of experiments" (e.g., 4000), please be patient. You can view the progress in the console. In most cases there is a display for every 50 experiments.
- Good Luck!
- This free software is developed by the Lab of Machine Learning, Southwest Petroleum University. If you use this software for your research work, please cite it as:
- Fan Min, William Zhu, Hong Zhao, Guiying Pan, Coser: Cost-sensitive rough set models, http://www.fansmale.com/software.html
- For any questions and suggestions, please contact Fan Min minfanphd@163.com.
 Reference
- 1 Fan Min, Hua-Ping He, Yu-Hua Qian, William Zhu, Test-cost-sensitive attribute reduction. Information Sciences 181 (2011) 4928–4942
- 2 Fan Min, William Zhu, Attribute reduction of data with error ranges and test costs. Information Sciences 211 (2012) 48–67
- 3 Hong Zhao, Fan Min, William Zhu, Cost-sensitive feature selection of numeric data with measurement errors. Journal of Applied Mathematics 2013 (2013) 1– 13
- 4 Hong Zhao, Fan Min, William Zhu, Test-cost-sensitive attribute reduction of data with normal distribution measurement errors. Mathematical Problems in Engineering 2013 (2013) 1–12
- 5 Zi-Long Xu, Hong Zhao, Fan Min, William Zhu, Ant colony optimization with three stages for independent test cost attribute reduction. Mathematical Problems in Engineering 2013 (2013) 1–12
- 6 Xu He, Fan Min, William Zhu, Comparison of discretization approaches for granular association rule mining. Canadian Journal of Electrical and Computer Engineering 37(3) (2014) 157–167
- 7 Fan Min, Qing-Hua Hu, William Zhu, Feature selection with test cost constraint. International Journal of Approximate Reasoning 55(1) (2014) 167–179
- 8 Fan Min, Juan Xu, Semi-greedy heuristics for feature selection with test cost constraints. Granular Computing 1 (2016) 199–211
- 9 Fan Min, Zhi-heng Zhang, Dong Ji, Ant colony optimization with partial-complete searching for attribute reduction. Journal of Computational Science 25 (2018) 170–182
I. What is Grale?
- 1. Grale (GRanular Association ruLEs) is a software package dedicated to granular association rule mining and recommender system development.
- 2. Grale has a good graphical user interface (GUI), and in each dialog there is a [Help] button providing information on parameters, source code file names, and related papers.
- 3. Grale is green (portable) software. You can download it, unzip it, and run it immediately.
- 4. Grale is open source software written in Java. The source code is well documented, and standard Java help files (in .html format) are generated.
- 5. Grale is evolving software. We update it frequently to include more algorithms.
II. Who uses Grale?
- 1. We use Grale to conduct experiments on our algorithms and compare them with algorithms from related work.
- 2. Reviewers of our papers may use Grale to check the effectiveness and efficiency of our algorithms.
- 3. You can use Grale to implement your algorithms and compare them with ours!
III. What is the platform requirement of Grale?
- 1. Windows. Grale has not been tested on other operating systems yet. However, since Grale is written in Java, it should be easy to port.
- 2. JDK or JRE 1.5 or higher. In most cases you have it already.
- 3. Weka. This is because we use some Weka APIs.
IV. How to obtain Grale?
V. How to repeat our experiments?
- If you are a reviewer or a reader of our papers, please follow the instructions below to reproduce our results. Note that the numbers are the same as in my publication list.
92. Fan Min and William Zhu, Mining top-k granular association rules for recommendation (submitted to AGC 2013).
- 92.1 Start grale by double clicking grale.bat
- 92.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) -> OK
- 92.3 Implicit preference -> Top-k rules recommendation -> (settings as indicated in the paper) -> Compute. Results are given in the dialog, and run time information is given in the console.
91. Fan Min and William Zhu, Cold-start recommendation through granular association rules (submitted to JRS 2013).
- 91.1 Start grale by double clicking grale.bat
- 91.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) -> OK
- 91.3 Implicit preference -> Train and Test -> (settings as indicated in the paper) -> Compute. Results are given in the dialog, and run time information is given in the console.
90. Fan Min, Qinghua Hu, and William Zhu, Granular association rules on two universes with four measures, Information Sciences (under revision)
- 90.1 Start grale by double clicking grale.bat
- 90.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) -> OK
- 90.3 Implicit preference -> Rule generation -> (settings as indicated in the paper, use Sandwich) -> Compute. Rules are given in the dialog, and run time information is given in the console.
88. Fan Min and William Zhu, Granular association rules for multi-valued data, CCECE 2013.
- 88.1 Start grale by double clicking grale.bat
- 88.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there), check or uncheck "{0, 1} attributes viewed as scaled" -> OK
- 88.3 Implicit preference -> Rule generation -> (settings as indicated in the paper, compare Sandwich and Backward) -> Compute. Rules are given in the dialog, and run time information is given in the console. Note: the difference from paper 73 is that we filter out negative granules by checking "{0, 1} attributes viewed as scaled".
73. Fan Min, Qinghua Hu, and William Zhu, Granular association rule mining through parametric rough sets, BI 2012, LNCS 7670, pp. 320-331, 2012
- 73.1 Start grale by double clicking grale.bat
- 73.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) OK
- 73.3 Implicit preference -> Rule generation -> (settings as indicated in the paper, compare Sandwich and Backward) -> Compute. Rules are given in the dialog, and run time information is given in the console.
- Note: the difference from paper 90 is that we now enable "Backward" for any settings of thresholds.
72. Fan Min, Qinghua Hu, and William Zhu, Granular association rules with four subtypes, GrC, pp. 432-347, 2012.
- This paper does not have an experiment part. Please refer to an extension of the paper, namely paper 90.
- For each experiment, copy results from the dialog to Excel and draw the lines immediately!
- Of course, you can change data sets and settings to obtain your own results.
VI. How to find core code?
- In each dialog for running an algorithm, please click the [Help] button; the core code is indicated there.
- Good Luck!
- This free software is developed by the Lab of Machine Learning, Southwest Petroleum University. If you use this software for your research work, please cite it as:
- Fan Min, William Zhu, Xu He, Grale: Granular association rules, http://www.fansmale.com/software.html
- For any questions and suggestions, please contact Fan Min minfanphd@163.com.
 Reference
- 1 Xu He, Fan Min, William Zhu, Parametric rough sets with application to granular association rule mining. Mathematical Problems in Engineering 2013 (2013) 1–13
I. What is Petro?
- 1. Petro is a software package dedicated to granular association rule mining and recommender system development.
- 2. Petro has a good graphical user interface (GUI), and in each dialog there is a [Help] button providing information on parameters, source code file names, and related papers.
- 3. Petro is green (portable) software. You can download it, unzip it, and run it immediately.
- 4. Petro is open source software written in Java. The source code is well documented, and standard Java help files (in .html format) are generated.
- 5. Petro is evolving software. We update it frequently to include more algorithms.
II. Who uses Petro?
- 1. We use Petro to conduct experiments on our algorithms and compare them with algorithms from related work.
- 2. Reviewers of our papers may use Petro to check the effectiveness and efficiency of our algorithms.
- 3. You can use Petro to implement your algorithms and compare them with ours!
III. What is the platform requirement of Petro?
- 1. Windows. Petro has not been tested on other operating systems yet. However, since Petro is written in Java, it should be easy to port.
- 2. JDK or JRE 1.5 or higher. In most cases you have it already.
- 3. Weka. This is because we use some Weka APIs.
IV. How to obtain Petro?
V. How to repeat our experiments?
- If you are a reviewer or a reader of our papers, please follow the instructions below to reproduce our results. Note that the numbers are the same as in my publication list.
92. Fan Min and William Zhu, Mining top-k granular association rules for recommendation (submitted to AGC 2013).
- 92.1 Start Petro by double clicking Petro.bat
- 92.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) -> OK
- 92.3 Implicit preference -> Top-k rules recommendation -> (settings as indicated in the paper) -> Compute. Results are given in the dialog, and run time information is given in the console.
91. Fan Min and William Zhu, Cold-start recommendation through granular association rules (submitted to JRS 2013).
- 91.1 Start Petro by double clicking Petro.bat
- 91.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) -> OK
- 91.3 Implicit preference -> Train and Test -> (settings as indicated in the paper) -> Compute. Results are given in the dialog, and run time information is given in the console.
90. Fan Min, Qinghua Hu, and William Zhu, Granular association rules on two universes with four measures, Information Sciences (under revision)
- 90.1 Start Petro by double clicking Petro.bat
- 90.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) -> OK
- 90.3 Implicit preference -> Rule generation -> (settings as indicated in the paper, use Sandwich) -> Compute. Rules are given in the dialog, and run time information is given in the console.
88. Fan Min and William Zhu, Granular association rules for multi-valued data, CCECE 2013.
- 88.1 Start Petro by double clicking Petro.bat
- 88.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there), check or uncheck "{0, 1} attributes viewed as scaled" -> OK
- 88.3 Implicit preference -> Rule generation -> (settings as indicated in the paper, compare Sandwich and Backward) -> Compute. Rules are given in the dialog, and run time information is given in the console. Note: the difference from paper 73 is that we filter out negative granules by checking "{0, 1} attributes viewed as scaled".
73. Fan Min, Qinghua Hu, and William Zhu, Granular association rule mining through parametric rough sets, BI 2012, LNCS 7670, pp. 320-331, 2012
- 73.1 Start Petro by double clicking Petro.bat
- 73.2 Implicit preference -> Load MMER (choose three .arff files; examples are already there) OK
- 73.3 Implicit preference -> Rule generation -> (settings as indicated in the paper, compare Sandwich and Backward) -> Compute. Rules are given in the dialog, and run time information is given in the console.
- Note: the difference from paper 90 is that we now enable "Backward" for any settings of thresholds.
72. Fan Min, Qinghua Hu, and William Zhu, Granular association rules with four subtypes, GrC, pp. 432-347, 2012.
- This paper does not have an experiment part. Please refer to an extension of the paper, namely paper 90.
- For each experiment, copy results from the dialog to Excel and draw the lines immediately!
- Of course, you can change data sets and settings to obtain your own results.
VI. How to find core code?
- In each dialog for running an algorithm, please click the [Help] button; the core code is indicated there.
- Good Luck!
- This free software is developed by the Lab of Machine Learning, Southwest Petroleum University. If you use this software for your research work, please cite it as:
- Fan Min, William Zhu, Xu He, Petro: Granular association rules, http://www.fansmale.com/software.html
- For any questions and suggestions, please contact Fan Min minfanphd@163.com.
I. What is ActiveLearning?
- 1. Active learning is a paradigm in which the algorithm actively selects instances to be labeled, learns a classifier from these labels, and iteratively refines the classifier until a mature classifier is obtained.
- 2. Associate Professor Wang proposed a learning method that differs from traditional ones: while traditional methods select the instances to label at random, this algorithm first clusters the data and then selects the instances to label according to cost-related indices.
- 3. The algorithm starts from density-based clustering, uses density and distance as a double index, and selects instances to label by sorting objects according to their density and distance. Experimental comparisons with other classical approaches show that, in most cases, the algorithm achieves good results.
II. Related source code download
- 1. Active learning related source code download: Click download.
I. What are recommender systems?
- 1. Recommender systems or recommendation systems (platforms or engines) are a subclass of information filtering systems that seek to predict the "rating" or "preference" that a user would give to an item.
- 2. Recommender systems have become extremely common in recent years and are applied in a variety of domains.
- 3. The most popular ones are probably movies, music, news, books, research articles, search queries, social tags, and products in general. However, there are also recommender systems for experts, jokes, restaurants, financial services, life insurance, persons (online dating), and Twitter followers.
II. Estimate the magic barrier of recommender systems
- 1. What is the magic barrier of recommender systems?
     The magic barrier refers to a lower bound on the prediction error that a recommender system can attain. Zhang et al. propose three normal distribution models to estimate the magic barrier, which is induced by user uncertainty [1]. A toy sketch of this idea follows.
- 2. The magic barrier of recommender systems related source code download: Click download.
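The following toy simulation illustrates the idea (our own sketch, not the models of [1]): if each observed rating is the user's true preference plus Gaussian noise with standard deviation sigma, then even a predictor that recovers the true preference exactly cannot push the RMSE below about sigma.

```java
import java.util.Random;

public class MagicBarrierToy {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        double sigma = 0.5; // hypothetical user-uncertainty level
        int n = 100000;
        double sse = 0;
        for (int i = 0; i < n; i++) {
            double truth = 2.0 + rnd.nextDouble() * 2;             // true preference
            double observed = truth + rnd.nextGaussian() * sigma;  // noisy rating
            double perfectPrediction = truth; // a predictor that recovers the truth
            double err = observed - perfectPrediction;
            sse += err * err;
        }
        // The empirical floor approaches sigma as n grows.
        System.out.printf("Empirical RMSE floor: %.3f (sigma = %.3f)%n",
                Math.sqrt(sse / n), sigma);
    }
}
```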
 Reference
- 1 Heng-Ru Zhang, Fan Min, Yan-Xue Wu, et al. Magic barrier estimation models for recommended systems under normal distribution. Applied Intelligence 48(12) (2018) 4678–4693
I. How to correctly execute the tri-pattern discovery program?
- 1. Install and configure the Java development environment.
- 2. Install the Eclipse platform.
- 3. Import weka.jar as an external jar package.
- 4. Compile and run the Tri-pattern discovery program/SemiWildCard.java file in Eclipse (press F11); the results will be presented in the console window.
II. Download
- 1. The tri-pattern related source code download: Click download.
 Reference
- 1 Fan Min, Zhi-Heng Zhang, Wen-Jie Zhai, Rong-Ping Shen, Frequent pattern discovery with tri-partition alphabets. Information Sciences 507 (2020-01) 715-732
I. What is Triangle multiplying Jaccard (TMJ) similarity?
- 1. TMJ similarity is designed to provide better prediction ability for recommender systems.
- 2. The Triangle similarity considers both the lengths of two rating vectors and the angle between them, while the Jaccard similarity considers non-co-rated users. Therefore TMJ takes advantage of both the Triangle and Jaccard similarities, as sketched below.
- 3. Compared with eight state-of-the-art measures on four popular datasets under the leave-one-out scenario, the new measure outperforms all counterparts in terms of MAE and RMSE.
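The sketch below shows one way to combine the two parts, under our reading of the definitions: Triangle(u, v) = 1 - ||ru - rv|| / (||ru|| + ||rv||) over co-rated items, Jaccard(u, v) = |Iu ∩ Iv| / |Iu ∪ Iv| over the sets of rated items, and TMJ as their product. Consult the paper and the released code for the exact formulation.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TmjSimilarity {
    // ru and rv map item IDs to the ratings of users u and v.
    public static double tmj(Map<Integer, Double> ru, Map<Integer, Double> rv) {
        // Co-rated items for the Triangle part.
        Set<Integer> common = new HashSet<>(ru.keySet());
        common.retainAll(rv.keySet());
        double diff = 0, normU = 0, normV = 0;
        for (int item : common) {
            double a = ru.get(item), b = rv.get(item);
            diff += (a - b) * (a - b);
            normU += a * a;
            normV += b * b;
        }
        double triangle = (normU + normV == 0) ? 0
                : 1 - Math.sqrt(diff) / (Math.sqrt(normU) + Math.sqrt(normV));
        // Jaccard over the sets of rated items (captures non-co-rated items).
        Set<Integer> union = new HashSet<>(ru.keySet());
        union.addAll(rv.keySet());
        double jaccard = union.isEmpty() ? 0 : (double) common.size() / union.size();
        return triangle * jaccard;
    }
}
```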
II. Download
- 1. TMJ similarity related source code and datasets download: Click download.
 Reference
- 1 Shuang-Bo Sun, Zhi-Heng Zhang, Xin-Ling Dong, Heng-Ru Zhang, Tong-Jun Li, Lin Zhang, Fan Min, Integrating Triangle and Jaccard similarities for recommendation. PLOS ONE 12(8) (2017) 1–16
I. How to run these unsupervised feature selection methods?
- 1. Install the Eclipse platform.
- 2. Import UnsupervisedFS to Eclipse.
- 3. In these packages, data.* stores data and some intermediate results; ufs.cluster.* stores the algorithms associated with clustering; ufs.featureselection.* stores the different feature selection algorithms; ufs.general.algorithm stores some other algorithms; ufs.general.*.test stores the different testing methods for feature selection; ufs.utils stores common values and utility methods.
- 4. Run any ufs.general.*.test.Test*.java with Java 8 or later; the clustering accuracy is shown in the console.
II. Download
- 1. NLS and NNLS related source code and datasets download: Click download.
I. What is TSD?
- 1. In this paper, we propose the two-stage density clustering algorithm, which takes advantage of granular computing to address the aforementioned issues.
- 2. The new algorithm is highly efficient, adaptive to various types of data, and requires minimal parameter setting.
- 3. The first stage uses the two-round-means algorithm to obtain √n small blocks, where n is the number of instances (a sketch follows this list).
- 4. This stage decreases the data size directly from n to √n.
- 5. The second stage constructs the master tree and obtains the final blocks.
- 6. This stage borrows the structure of CFDP, while the cutoff distance parameter is not required.
- 7. The time complexity of the algorithm is O(mn^(3/2)), which is lower than O(mn^2) for CFDP.
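A sketch of the first stage under explicit assumptions: we take "two-round-means" to be k-means with k = ⌈√n⌉ run for exactly two assignment/update rounds, which already yields the √n small blocks; the real implementation may differ in initialization and details.

```java
import java.util.Random;

public class TwoRoundMeans {
    // data[i] is an m-dimensional instance; returns a block label per instance.
    public static int[] cluster(double[][] data) {
        int n = data.length, m = data[0].length;
        int k = (int) Math.ceil(Math.sqrt(n)); // sqrt(n) blocks
        Random rnd = new Random(42);
        double[][] centers = new double[k][];
        for (int c = 0; c < k; c++) centers[c] = data[rnd.nextInt(n)].clone();
        int[] label = new int[n];
        for (int round = 0; round < 2; round++) { // exactly two rounds
            for (int i = 0; i < n; i++) label[i] = nearest(data[i], centers);
            // Recompute the centers as block means.
            double[][] sum = new double[k][m];
            int[] count = new int[k];
            for (int i = 0; i < n; i++) {
                count[label[i]]++;
                for (int j = 0; j < m; j++) sum[label[i]][j] += data[i][j];
            }
            for (int c = 0; c < k; c++)
                if (count[c] > 0)
                    for (int j = 0; j < m; j++) centers[c][j] = sum[c][j] / count[c];
        }
        return label;
    }

    private static int nearest(double[] x, double[][] centers) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centers.length; c++) {
            double d = 0;
            for (int j = 0; j < x.length; j++)
                d += (x[j] - centers[c][j]) * (x[j] - centers[c][j]);
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }
}
```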
II. Download
- 1. TSD related source code and datasets download: Click download.
I. What is MBR?
- 1. In this paper, we propose an efficient CF algorithm based on a new measure called the M-distance, which is defined as the difference between the average ratings of two items (see the sketch after this list).
- 2. In the initialization stage, we compute the average ratings of items and store them in two vectors, which requires O(m) space.
- 3. In the online prediction stage, predicting p ratings requires O(np) time, compared with the O(mnp) time of the cosine-based kNN algorithm.
- 4. Our results show that the new algorithm is significantly faster than the conventional techniques, especially for large datasets, and that its prediction ability is no worse in terms of the mean absolute error and root mean square error.
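A sketch of M-distance-based prediction under the description above (the threshold eps and the fallback to the item average are our illustrative choices): the neighbors of a target item are the items the user has rated whose average ratings differ from the target's by less than eps.

```java
import java.util.Map;

public class MDistancePredict {
    // itemAvg[i]: precomputed average rating of item i (the O(m)-space vectors).
    // userRatings: items rated by the user, mapped to the ratings.
    // Predict the user's rating on target item t.
    public static double predict(double[] itemAvg, Map<Integer, Double> userRatings,
            int t, double eps) {
        double sum = 0;
        int count = 0;
        for (Map.Entry<Integer, Double> e : userRatings.entrySet()) {
            // M-distance: absolute difference of the two items' average ratings.
            if (Math.abs(itemAvg[e.getKey()] - itemAvg[t]) < eps) {
                sum += e.getValue();
                count++;
            }
        }
        return count == 0 ? itemAvg[t] : sum / count; // fallback: item average
    }
}
```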
II. Download
- 1. MBR related source code and datasets download: Click download.
 Reference
- 1 Mei Zheng, Fan Min, Heng-Ru Zhang, Wen-Bin Chen, Fast recommendations with the M-distance. IEEE Access 4 (2016) 1464–1468
I. What is ALEC?
- 1. Wang et al. propose the active learning through density clustering (ALEC) algorithm with three new features.
- 2. We design a new importance measure to select representative instances deterministically (a sketch of one such measure follows this list).
- 3. We employ tri-partition to determine the action to be taken on each instance.
- 4. The new algorithm generally outperforms state-of-the-art active learning algorithms.
- 5. The new algorithm requires only O(n) space and O(mn^2) time.
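A sketch of one density-distance importance measure in the CFDP style, which we assume matches the paper's idea: density ρ counts neighbors within a cutoff dc, δ is the distance to the nearest higher-density instance, and the product ρδ ranks instances for deterministic selection.

```java
public class DensityDistanceImportance {
    // dist is the n-by-n pairwise distance matrix; dc is the cutoff distance.
    public static double[] importance(double[][] dist, double dc) {
        int n = dist.length;
        double[] rho = new double[n], delta = new double[n], imp = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j && dist[i][j] < dc) rho[i]++; // cutoff density
        for (int i = 0; i < n; i++) {
            delta[i] = Double.MAX_VALUE;
            for (int j = 0; j < n; j++)
                if (rho[j] > rho[i]) delta[i] = Math.min(delta[i], dist[i][j]);
            if (delta[i] == Double.MAX_VALUE) { // the densest instance
                delta[i] = 0;
                for (int j = 0; j < n; j++) delta[i] = Math.max(delta[i], dist[i][j]);
            }
            imp[i] = rho[i] * delta[i]; // representative instances score high
        }
        return imp;
    }
}
```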
II. Download
- 1. ALEC related source code and datasets download: Click download.
 Reference
- 1 Min Wang, Fan Min, Yan-Xue Wu, Zhi-Heng Zhang, Active learning through density clustering. Expert Systems with Applications 85 (2017) 305–317
I. What is ALTA?
- 1. In this paper, we propose an effective and adaptive algorithm that will be called active learning through two-stage clustering (ALTA).
- 2. The first stage is data preprocessing using the two-round-clustering algorithm.
- 3. Let n be the number of instances; this stage obtains √n small blocks.
- 4. For each block, the closest instance of the center is selected as the sample.
- 5. The second stage is the active learning of sampling instances through density clustering.
- 6. This stage consists of a number of iterations of density clustering, labeling and classification.
- 7. In general, data preprocessing reduces the size of the data and the complexity of the algorithm.
- 8. The combination of distance vector clustering and density clustering makes the algorithm more adaptive.
II. Download
- 1. ALTA related source code and datasets download: Click download.
I. What is CRC?
- 1. Liu et al. compared six similarity measures for covering-based neighborhood classifiers.
- 2. We analyzed the similarities and differences of these six measures.
- 3. We analyzed the reasons for the different results.
- 4. We analyzed the time complexity of each similarity measure.
- 5. Results show that no single measure is suitable for every dataset, but Overlap is relatively the best choice.
II. Download
- 1. CRC related source code download: Click download.
I. What is CADU?
- 1. We proposed the CADU algorithm.
- 2. We proposed the LUD model.
- 3. We proposed the optimization objective for minimizing cost.
- 4. We analyzed the time complexity of our algorithm.
- 5. Results show that CADU outperforms other state-of-the-art algorithms in terms of clustering accuracy.
II. Download
- 1. CADU related source code download: Click download.
 Reference
- 1 Qi Huang, Yuan-Yuan Xu, Yong Chen, Heng-Ru Zhang, Fan Min, An Adaptive Mechanism for Recommendation Algorithm Ensemble. IEEE Access 7 (2019-01) 10331-10342
I. ALSE Highlights
- 1. We define two label error statistics functions and build clustering-based practical statistical models to guide block splitting.
- 2. We propose a center-and-edge instance selection strategy to choose critical instances.
- 3. We design an algorithm called active learning through label error statistical methods (ALSE).
- 4. Results of the significance test verify the superiority of ALSE over state-of-the-art algorithms.
II. Download
- 1. ALSE related source code and datasets download: Click download.
I. What is FPSF?
- 1. We propose the first-arrival picking through sliding windows and fuzzy c-means (FPSF) algorithm.
- 2. We design a range detection technique using sliding windows in the vertical and horizontal directions.
- 3. We apply a vertical sliding window to capture the large and early abrupt energy shift in each trace (a sketch follows this list).
- 4. We apply a horizontal window to adjust the neighboring first-arrival intervals determined by the vertical windows.
- 5. We employ PSO to find the original clustering centers of FCM, exploiting PSO's advantages of global optimization and fast convergence.
- 6. We employ FCM to pick first arrivals according to the similarity of the first-arrival energy values of adjacent traces.
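A sketch of the vertical window under stated assumptions (an energy-ratio detector of our own design, not the paper's exact code): slide along one trace and report the sample where the mean energy of the window after it most exceeds the energy of the window before it.

```java
public class VerticalWindowPick {
    // trace: amplitude samples of one trace; w: half-window length in samples.
    public static int pick(double[] trace, int w) {
        int best = w;
        double bestRatio = 0;
        for (int i = w; i < trace.length - w; i++) {
            double before = 0, after = 0;
            for (int j = 1; j <= w; j++) {
                before += trace[i - j] * trace[i - j]; // energy before sample i
                after += trace[i + j] * trace[i + j];  // energy after sample i
            }
            double ratio = after / (before + 1e-12); // abrupt energy-shift score
            if (ratio > bestRatio) {
                bestRatio = ratio;
                best = i;
            }
        }
        return best; // candidate first-arrival sample index
    }
}
```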
II. Download
- 1. FPSF related source code and datasets download: Click download.
 Reference
- 1 Lei Gao, Zhen-yun Jiang, Fan Min, First-Arrival Travel Times Picking through Sliding Windows and Fuzzy C-Means. Mathematics 7(3) (2019) 1-13
I. CATS Highlights
- 1. We propose hypothetical distribution models with a theoretical method to compute the optimal number of query labels.
- 2. We present a clustering-based practical statistical method for the same issue.
- 3. We design the cost-sensitive active learning through statistical methods (CATS) algorithm.
- 4. Results of the significance test verify the superiority of CATS over state-of-the-art algorithms.
II. Download
- 1. CATS related source code and datasets download: Click download.
 Reference
- 1 Xiuyi Jia, Zhao Deng, Fan Min, Dun Liu, Three-way decisions based feature fusion for Chinese irony detection. International Journal of Approximate Reasoning 113 (2019-10) 324-335
I. MSAL Highlights
- 1. In this paper, we propose the active learning through multi-standard optimization (MSAL) algorithm, which considers the informativeness, representativeness, and diversity of instances.
- 2. Informativeness is measured by the soft-max predicted entropy (a sketch follows this list).
- 3. Representativeness is measured by the probability density function obtained by non-parametric estimation.
- 4. The product of the two is used as an optimization objective to reduce model uncertainty and explore the distribution of unlabeled data.
- 5. Diversity is measured by the difference between the selected critical instances.
- 6. It is used as a constraint to avoid choosing instances that are too similar.
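A minimal sketch of the informativeness measure: the entropy of the soft-max class distribution, computed stably by subtracting the maximum logit. Higher entropy means the model is less certain about the instance, so it is more informative to query.

```java
public class SoftmaxEntropy {
    public static double entropy(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : logits) max = Math.max(max, v);
        double sum = 0;
        double[] p = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp(logits[i] - max); // numerically stable soft-max
            sum += p[i];
        }
        double h = 0;
        for (int i = 0; i < p.length; i++) {
            p[i] /= sum;
            if (p[i] > 0) h -= p[i] * Math.log(p[i]);
        }
        return h; // in nats; maximal for a uniform prediction
    }
}
```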
II. Download
- 1. MSAL related source code and datasets download: Click download.
I. CALS Highlights
- 1. We propose a cost-sensitive active learning problem that considers a new but meaningful classification scenario. It is dedicated to solving the classification of complex data (with missing attribute values and scarce labels).
- 2. We present a cost/benefit optimization method that provides a unified evaluation of attribute values and labels. It considers the interaction of attribute value and label costs/benefits.
- 3. Representativeness is measured by the probability density function obtained by non-parametric estimation.
II. Download
- 1. CALS related source code and datasets download: Click download.