Datasets that are large in volume, variety, and velocity are becoming increasingly common, yet limited computing resources often restrict the analyses that can be performed on them. Nonuniform subsampling methods are effective in reducing the computational burden of massive data; however, the variance of the resulting estimator becomes large when the subsampling probabilities are highly heterogeneous. To address this, we develop two new algorithms that improve estimation for massive-data logistic regression, based on a chosen hard threshold value and on combining subsamples, respectively. The basic idea of the hard-threshold method is to carefully select a threshold value and replace any subsampling probability below it with the threshold itself. The combining-subsamples method better exploits the information in the data without hitting the computational bottleneck: it generates many subsamples and then combines the estimates constructed from them. This method yields the standard error of the parameter estimator without estimating the sandwich matrix, which simplifies statistical inference for massive data, and can substantially improve estimation efficiency. Asymptotic properties of the resulting estimators are established. Simulations and real-data analyses assess and illustrate the practical performance of the proposed methods.
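The two ideas can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the renormalization after thresholding, the weighted IRLS fitting routine, and all tuning values (the threshold delta, subsample size r, number of subsamples B) are assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)


def hard_threshold_probs(pi, delta):
    """Hard-threshold method (sketch): lift probabilities below delta up to
    delta, then renormalize so they sum to one (renormalization assumed)."""
    pi_t = np.maximum(pi, delta)
    return pi_t / pi_t.sum()


def fit_weighted_logistic(X, y, w, n_iter=25):
    """Weighted logistic regression via Newton/IRLS with
    inverse-probability weights (a generic fitter, assumed here)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = np.clip(X @ beta, -30.0, 30.0)  # guard against overflow
        p = 1.0 / (1.0 + np.exp(-eta))
        grad = X.T @ (w * (y - p))
        H = (X * (w * p * (1.0 - p))[:, None]).T @ X
        beta += np.linalg.solve(H, grad)
    return beta


def combine_subsamples(X, y, pi, r, B):
    """Combining-subsamples method (sketch): draw B subsamples of size r
    with probabilities pi, fit each with inverse-probability weights, and
    average the B estimates; the standard error comes from the spread of
    the estimates, so no sandwich matrix is estimated."""
    n = len(y)
    betas = []
    for _ in range(B):
        idx = rng.choice(n, size=r, replace=True, p=pi)
        betas.append(fit_weighted_logistic(X[idx], y[idx], 1.0 / (r * pi[idx])))
    betas = np.vstack(betas)
    est = betas.mean(axis=0)
    se = betas.std(axis=0, ddof=1) / np.sqrt(B)  # SE from between-subsample spread
    return est, se


# Toy data: n observations from a logistic model with known coefficients.
n, beta_true = 100_000, np.array([0.5, -1.0, 1.0])
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

# Heterogeneous subsampling probabilities (a leverage-like score, assumed),
# stabilized with the hard threshold before subsampling.
score = np.abs(X[:, 1]) + 0.01
pi = hard_threshold_probs(score / score.sum(), delta=0.2 / n)

est, se = combine_subsamples(X, y, pi, r=1000, B=20)
print(est, se)
```

With B subsample estimates in hand, the standard error is just the sample standard deviation of the estimates divided by the square root of B, which is what makes inference convenient without the sandwich matrix.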