Automatically browsing Douyin (Tik Tok) with Python: an example

  • 2021-11-10 10:06:44
  • OfStack

Preface

It is said that Douyin (Tik Tok) is addictive: once you start scrolling, you can't stop. Douyin has clearly got a firm grip on some people's inner needs. But today is not about discussing Douyin itself. Instead, we will learn how to use Python to scroll Douyin automatically, and to like and comment on videos of good-looking people.
Project environment
Language: Python3
Editor: Pycharm
Other tools: a mobile phone, a USB data cable, Android Studio

Implementation approach

1. Take a screenshot of the Douyin short video playing on the phone
2. Call the Baidu API to detect faces
3. Like and comment on videos that meet the criteria

Getting screenshots of Douyin videos
To capture video screenshots we use the adb tool. adb (Android Debug Bridge) is a bridge between an Android phone and a PC: through adb you can manage and operate both emulators and physical devices, e.g. install software, inspect device hardware and software parameters, upgrade the system, and run shell commands. Here, we take screenshots of the phone by sending the corresponding commands from a command-line window. If the adb toolkit is not installed, install it first.
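Before sending any screenshot commands, it is worth checking that adb actually sees a ready device. The sketch below is my own addition, not part of the original article: a small helper that parses the output of `adb devices` (the parsing is pure Python; the `connected_devices` wrapper assumes adb is installed and on your PATH).

```python
import subprocess


def parse_adb_devices(output: str) -> list[str]:
    """Parse the output of `adb devices` into a list of serials of ready devices."""
    devices = []
    for line in output.strip().splitlines()[1:]:  # skip the "List of devices attached" header
        parts = line.split()
        # devices reported as "unauthorized" or "offline" are not usable yet
        if len(parts) == 2 and parts[1] == "device":
            devices.append(parts[0])
    return devices


def connected_devices() -> list[str]:
    # Assumes the adb binary is installed and on PATH
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
    return parse_adb_devices(out)
```

If `connected_devices()` returns an empty list, check the USB cable and that USB debugging is enabled on the phone.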

Specific implementation code


import os
from PIL import Image  # pip install Pillow

#  Image downscale factors (divisors applied to width/height)
SIZE_normal = 1.0
SIZE_small = 1.5
SIZE_more_small = 2.0


#  Take a screenshot of the phone via adb
def get_screen_shot_img():
    #  Capture the screen (screencap -p writes PNG data)
    os.system("adb shell /system/bin/screencap -p /sdcard/screenshot.png")
    os.system("adb pull /sdcard/screenshot.png face.jpg")
    #  Compress the picture
    img = Image.open("face.jpg").convert('RGB')
    scale = SIZE_small
    w, h = img.size
    img.thumbnail((int(w / scale), int(h / scale)))
    img.save('face.jpg')

Call Baidu API to recognize faces

(1) Enter the face recognition console of Baidu Cloud

https://console.bce.baidu.com/ai/?_=1528192333418&fromai=1#/ai/face/overview/index

If you don't have a Baidu account, you can quickly register one with your mobile phone number.

(2) Create face recognition applications

After logging in, you need to create an application before you can formally call the Baidu API. Once the application is created, you will receive the API Key and Secret Key for that application; these two parameters are used for interface calls and related configuration.

Click Create Application in the figure above, fill in the "Application Name" and "Application Description", and create the application (other options can be left at their defaults).

(3) Get the secret key

After creation, click "Return to Application List" to proceed. The platform will have assigned credentials for this application: an API Key and a Secret Key. These are used in the next step to obtain the Access Token required to call the interface.

(4) With the API Key and Secret Key in hand, obtain the Access Token required for interface calls from these two parameters

Specific implementation code


import json
from urllib import request

API_KEY = "AK obtained from the console"      # fill in your own
SECRET_KEY = "SK obtained from the console"   # fill in your own


def get_access_token():
    # client_id is the API Key (AK), client_secret is the Secret Key (SK)
    host = ('https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials'
            '&client_id=' + API_KEY + '&client_secret=' + SECRET_KEY)
    header_dict = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko',
                   "Content-Type": "application/json"}
    req = request.Request(url=host, headers=header_dict)
    res = request.urlopen(req).read()
    res_json = json.loads(res.decode('utf-8'))
    return res_json["access_token"]

Call Baidu API


'''
 Call the Baidu API for face detection
img_path: path of the picture
access_token: developer token
'''


import base64
import json
import urllib.parse


def get_face_info_from_bai_du(img_path, access_token):
    #  Interface request address (note: v3)
    request_url = "https://aip.baidubce.com/rest/2.0/face/v3/detect"
    #  Open the picture file in binary mode
    with open(img_path, 'rb') as f:
        #  Convert the picture to base64
        img = base64.b64encode(f.read())
    params = {"face_field": "age,beauty,gender", "image": img, "image_type": 'BASE64', "max_face_num": 5}
    params = urllib.parse.urlencode(params).encode(encoding='utf-8')
    request_url = request_url + "?access_token=" + access_token
    #  POST the request via a helper (not shown here) that returns the response body
    face_info = get_info_post_json_data(request_url, params)
    #  Parse the json string into an object
    face_json = json.loads(face_info)
    print(face_info)
    face_dict = {}  # stays empty if the call failed or no face was found
    if face_json['error_msg'] == 'SUCCESS':
        if face_json['result']['face_num'] != 0:
            #  Extract the desired fields into a dictionary
            result = face_json['result']['face_list'][0]
            gender = result['gender']['type']
            age = str(result['age'])
            beauty = str(result['beauty'])
            face_dict = {"gender": gender, "age": age, "beauty": beauty}
    return face_dict

Note, there is a pitfall here; let me describe how I fell into it. After applying for the AK and SK, I called the face-recognition API and got back: {'error_code': 6, 'error_msg': 'No permission to access data'}. The official documentation said I had no permission to access the data. But what permission could I be missing? Do I have to apply separately to call this interface? Don't I already have an AK and SK? After much fruitless searching I gave up, came back to it in the evening, and dug through the official docs again. I happened to notice that the API had been upgraded from v2 to v3: the AK and SK I had applied for were for v3, while the interface I was calling was the v2 one. The only visible difference is the request URL, which is why the cause was so hard to find. The lessons from falling into this pit: 1. read the official documentation carefully; 2. if you can't find a bug, set it aside and come back later, and it will get solved.

The JSON data returned by a normal API call is shown below:


{
    "error_code":0,
    "error_msg":"SUCCESS",
    "log_id":304592828857184421,
    "timestamp":1542885718,
    "cached":0,
    "result":{
        "face_num":1,
        "face_list":[
            {
                "face_token":"9ae54ea1941d2b9d8a7e881f3ae39fe1",
                "location":{
                    "left":374.5,
                    "top":406.94,
                    "width":140,
                    "height":136,
                    "rotation":30
                },
                "face_probability":0.99,
                "angle":{
                    "yaw":-12,
                    "pitch":10.26,
                    "roll":29.76
                },
                "age":21,
                "beauty":53.22,
                "gender":{
                    "type":"female",
                    "probability":1
                }
            }
        ]
    }
}

What face recognition returns depends on the parameters passed to the interface you call: the more fields you request, the more detailed the response. Here I only request age, gender and beauty score. Note also that the v2 and v3 interfaces return data in different formats.
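To make the response format concrete, here is a small standalone parser (my own helper, not from the original article) that mirrors the extraction logic above and can be exercised directly against the sample v3 JSON shown earlier:

```python
def extract_face_dict(face_json: dict) -> dict:
    """Pull gender, age and beauty out of a v3 /face/detect response.

    Returns an empty dict when the call failed or no face was found.
    """
    if face_json.get("error_msg") != "SUCCESS":
        return {}
    result = face_json.get("result") or {}
    if result.get("face_num", 0) == 0:
        return {}
    # Only the first detected face is used, as in the article
    face = result["face_list"][0]
    return {
        "gender": face["gender"]["type"],
        "age": str(face["age"]),
        "beauty": str(face["beauty"]),
    }
```

Feeding it the sample response above yields `{"gender": "female", "age": "21", "beauty": "53.22"}`.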
Like and comment on eligible videos
After obtaining the data returned by the API, a few checks follow. My criteria here: if a face is recognized, the age is over 18 and the beauty score is over 40, then like + comment.

Concrete realization


'''
 Analyze the obtained data
face_dict: data returned by face recognition
'''


def analysis_face(face_dict):
    #  If a face was found, continue checking
    if len(face_dict) != 0:
        who = "little sister" if face_dict["gender"] == "female" else "little brother"
        print("Gender: " + ("female" if face_dict["gender"] == "female" else "male"))
        print("Age: " + face_dict["age"])
        print("Beauty score: " + face_dict["beauty"])
        #  Continue only if the beauty score is above 40 and the age is above 18
        if float(face_dict["beauty"]) > 40 and float(face_dict["age"]) > 18:
            #  Like and comment
            commentaries()
            print("------------------ Caught a " + who + "! ------------------")
            print("------------- Such a high beauty score, liked ❤ -------------")
        else:
            print("Beauty score too low, keep trying, on to the next one")
    else:
        print("No little sister or brother found, on to the next one")
    #  Swipe up to the next video
    next_video()
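
The `next_video()` function called above is not shown in the article; a minimal sketch using adb's `input swipe` might look like the following. The coordinates are assumptions for a roughly 1080x2340 screen, so measure them on your own device:

```python
import os


def build_swipe_command(x1, y1, x2, y2, duration_ms=300):
    """Build an adb swipe command; a quick upward swipe advances to the next video."""
    return f"adb shell input swipe {x1} {y1} {x2} {y2} {duration_ms}"


def next_video():
    # Coordinates are guesses for a ~1080x2340 screen; measure yours with
    # Android Device Monitor as described below.
    os.system(build_swipe_command(540, 1800, 540, 600))
```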

How do we like and comment? Again, with the adb tool. However, there is also an Android Studio plug-in involved: Android Device Monitor. Why use it? Because to tap the like button we need to know its exact position on the screen, and likewise commenting requires the coordinates of the comment input box.

Let's look at how to use Android Device Monitor to obtain coordinates on the phone screen.

After connecting the phone, follow the four steps shown above to get the position of any point on the phone screen; an approximate position is good enough here. So where does this plug-in come from? Android Studio versions before 3.0 ship with it, but I happen to be on a version after 3.0, so some extra work is needed. The specific usage is as follows:

Run `monitor` from the command line in the android-sdk/tools/ directory: that is, open a CMD window, cd into the directory where the Android SDK was installed, cd into its tools subdirectory, and enter the command `monitor`. That launches the long-sought Android Device Monitor. Because screen sizes differ from phone to phone, the parameters below are only the coordinates measured on my own device.
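Since the tap coordinates below were measured on one particular phone, a small helper (my own addition, not from the article) can rescale them for a device with a different resolution. The reference resolution here is an assumption; check yours with `adb shell wm size`:

```python
def scale_point(x, y, measured_res=(1080, 2160), target_res=(720, 1440)):
    """Scale a tap point measured on one screen resolution to another.

    measured_res is the resolution the coordinates were originally taken on
    (assumed here to be 1080x2160); target_res is your device's resolution.
    """
    mx, my = measured_res
    tx, ty = target_res
    # Scale each axis independently and round to whole pixels
    return round(x * tx / mx), round(y * ty / my)
```

For example, the like-button tap at (1000, 1200) on the assumed 1080x2160 screen maps to (667, 800) on a 720x1440 one.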

The following is the concrete implementation of liking and commenting:


import os
import time


#  Like and comment
def commentaries():
    os.system("adb shell input tap 1000 1200")  #  like
    time.sleep(0.01)
    os.system("adb shell input tap 1000 1400")  #  tap the comment button
    time.sleep(0.05)
    os.system("adb shell input tap 50 2000")  #  focus the EditText input box
    os.system("adb shell am broadcast -a ADB_INPUT_TEXT --es msg ' How nice, how nice '")  #  comment
    os.system("adb shell input tap 1000 1860")  #  send the comment
    time.sleep(1)
    os.system("adb shell input tap 500 100")  #  return to the main interface

One more thing to note here: adb shell input text does not support Chinese. It can only type English such as 'hello world', and the keyboard must already be switched to English input mode. Searching further for a way to type Chinese turned up another gem: ADBKeyBoard.apk, an input method written by a foreign developer that solves the Chinese-input problem perfectly. Source address: https://github.com/senzhk/ADBKeyBoard. Install this app and switch the default input method to ADBKeyBoard.

Finally, a reminder: Douyin limits the number of comments an account can post within a period of time. Comment too frequently for too long and it will temporarily revoke your commenting permission, with a message along the lines of "You are posting too fast, please slow down", presumably to deter comment spam. The ban lifts after about an hour.
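To stay under such a limit, the script could gate each comment through a simple sliding-window throttle. This is a sketch of my own, not something from the article, and the limits (5 comments per 60 seconds) are guesses rather than Douyin's actual thresholds:

```python
import time
from collections import deque


class CommentThrottle:
    """Sliding-window limiter: allow at most max_actions per window_s seconds."""

    def __init__(self, max_actions=5, window_s=60.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self._times = deque()  # timestamps of recent allowed actions

    def allow(self, now=None):
        """Return True if another action fits in the current window, recording it."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        while self._times and now - self._times[0] >= self.window_s:
            self._times.popleft()
        if len(self._times) < self.max_actions:
            self._times.append(now)
            return True
        return False
```

Before calling `commentaries()`, the script would check `throttle.allow()` and skip the comment (but still swipe onward) when it returns False.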

